diff --git "a/blogs_splitted_dataset.jsonl" "b/blogs_splitted_dataset.jsonl"
--- "a/blogs_splitted_dataset.jsonl"
+++ "b/blogs_splitted_dataset.jsonl"
@@ -1,5150 +1,1958 @@
-{"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch Trace Analysis for the Masses\"\nauthor: Anupam Bhatnagar, Xizhou Feng, Brian Coutinho, Yifan Liu, Sung-Han Lin, Louis Feng, and Yuzhen Huang\n---", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "We are excited to announce the public release of Holistic Trace Analysis (HTA), an open source performance analysis and visualization Python library for PyTorch users. HTA takes as input [Kineto traces](https://github.com/pytorch/kineto) collected by the [PyTorch profiler](https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/), which are complex and challenging to interpret, and up-levels the performance information contained in these traces. It was initially developed", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "internally at Meta to understand and debug performance problems for large-scale distributed training jobs on GPUs. The multidisciplinary team has made a number of enhancements to HTA\u2019s features and scaled them to support state-of-the-art ML workloads.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "ML researchers and systems engineers often struggle to computationally scale up their models because they are not aware of the performance bottlenecks in their workloads. The resources requested for a job (e.g. GPUs, memory) are often misaligned with the resources actually required due to lack of visibility \u201cunder the hood\u201d. To achieve the best performance from the hardware stack, it is imperative to understand the resource utilization and bottlenecks for distributed training workloads.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "The initial HTA implementation was specifically targeted at Deep Learning Based Recommendation Models (DLRM). To make the features in HTA generic and applicable to use cases such as analyzing Vision and NLP models, we decided to refactor the HTA codebase and make the library available to the larger community. This new codebase has implemented several important ideas which lead to significant efficiency and performance improvements.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch Trace Analysis for the Masses\"\nauthor: Anupam Bhatnagar, Xizhou Feng, Brian Coutinho, Yifan Liu, Sung-Han Lin, Louis Feng, and Yuzhen Huang\n---\n\nWe are excited to announce the public release of Holistic Trace Analysis (HTA), an open source performance analysis and visualization Python library for PyTorch users. HTA takes as input [Kineto traces](https://github.com/pytorch/kineto) collected by the [PyTorch profiler](https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/), which are complex and challenging to interpret, and up-levels the performance information contained in these traces. It was initially developed internally at Meta to understand and debug performance problems for large-scale distributed training jobs on GPUs. The multidisciplinary team has made a number of enhancements to HTA\u2019s features and scaled them to support state-of-the-art ML workloads.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "ML researchers and systems engineers often struggle to computationally scale up their models because they are not aware of the performance bottlenecks in their workloads. The resources requested for a job (e.g. GPUs, memory) are often misaligned with the resources actually required due to lack of visibility \u201cunder the hood\u201d. To achieve the best performance from the hardware stack, it is imperative to understand the resource utilization and bottlenecks for distributed training workloads.\n\nThe initial HTA implementation was specifically targeted at Deep Learning Based Recommendation Models (DLRM). To make the features in HTA generic and applicable to use cases such as analyzing Vision and NLP models, we decided to refactor the HTA codebase and make the library available to the larger community. This new codebase has implemented several important ideas which lead to significant efficiency and performance improvements.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
{"page_content": "In this blog, we present several features implemented in the open source version of HTA, which can be used as a Python script as well as interactively in a Jupyter notebook. HTA provides the following features:", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "1. **Breakdown by Dimensions**\n 1. **Temporal**: Breakdown of GPU time in terms of time spent in computation, communication, memory events, and idle time on a single node and across all ranks.\n 1. **Idle Time**: Breakdown of GPU idle time into waiting for the host, waiting for another kernel or attributed to an unknown cause.\n 1. **Kernel**: Find kernels with the longest duration on each rank.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "1. **Communication Computation Overlap**: Calculate the percentage of time when communication overlaps computation.\n1. **Statistical Analysis**\n 1. **Kernel Duration Distribution**: Distribution of average time taken by longest kernels across different ranks.\n 1. **CUDA Kernel Launch**: Distributions of GPU kernels with very small duration, large duration, and excessive launch time.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "1. **Augmented Counters (Memory bandwidth, Queue length)**: Augmented trace files which provide insights into memory copy bandwidth and number of outstanding operations on each CUDA stream.\n1. **Patterns**\n 1. **Frequent CUDA Kernels**: Find the CUDA kernels most frequently launched by any given PyTorch or user defined operator.\n1. **Trace Comparison**\n 1. **Trace Diff**: A trace comparison tool to identify and visualize the differences between traces.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "HTA source code is available to users via [Github](https://github.com/facebookresearch/HolisticTraceAnalysis). Users can request new features or build their own analysis using the core libraries and data structures provided in the codebase in addition to the features mentioned above.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "GPU Training Performance Debugging 101\n\nTo understand the GPU performance in distributed training jobs, we consider how the model operators interact with the GPU devices and how such interactions are reflected in certain measurable metrics.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "At a high level, we can break down the GPU operations in a model execution into three broad categories, henceforth referred to as kernel types: \n1. **Computation (COMP)** - Compute kernels execute compiled routines for matrix multiplication and similar numeric calculations. They are responsible for all of the number-crunching necessary for model execution.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "1. **Communication (COMM)** - Communication kernels are routines which are responsible for exchanging and synchronizing data between different GPU devices in a distributed training job. The NVIDIA Collective Communication Library (NCCL) is a widely used communication library and all its kernels have the prefix \u201cnccl\u201d. Example NCCL kernels include NCCL_AllGather, NCCL_ReduceScatter, NCCL_AllReduce, etc.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "1. **Memory (MEM)** - Memory kernels manage the memory allocations/deallocations on the GPU devices and data movement between the memory space on the host and the GPUs. The memory kernels include Memcpy_H2D, Memcpy_D2H, Memcpy_D2D, Memset, etc. Here, H represents the Host and D represents the GPU Device. Thus, H2D, D2H, D2D stands for Host to Device, Device to Host and Device to Device respectively.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "Because a modern GPU device like the NVIDIA A100 GPU is a massively parallel device which is capable of running multiple kernels simultaneously, it is possible to overlap the computation, communication, and memory kernels to reduce the model execution time. One common technique to achieve the overlap is to utilize multiple CUDA streams. A CUDA stream is a sequence of operations that execute on a GPU device in the order in which they are issued by the host code. Different CUDA streams can be interleaved and", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "even run concurrently, thus achieving the effect of kernel overlap.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "To help understand the above concepts, Figure 1 provides a timeline of the GPU kernels in a sample distributed training job on 8 GPUs for one iteration. In the figure below, each rank represents one GPU and the kernels on each GPU run on 6 CUDA streams. In the right column of the figure, you can see names of the GPU kernels used. In the middle of the figure, you see the overlap between compute and communicate kernels. This figure is created using the [plot_timeline example", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "notebook](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/examples/plot_timeline.ipynb) available in HTA.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "{:width=\"100%\"}\n\n*Figure 1. An example of the execution timeline of GPU Kernels across multiple ranks*\n{: style=\"text-align: center;\"}", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "The performance of multiple GPU training jobs is affected by multiple factors. Among these factors, how does a model execution create and orchestrate the GPU kernels plays a critical role. HTA provides insights on how the model execution interacts with the GPU devices and highlights the opportunities for performance improvement.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "With the features we built in HTA, we aim to provide users insights into \u201cwhat is happening under the hood in a distributed GPU training?\u201d We briefly describe these features in the next few paragraphs.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "Features in Holistic Trace Analysis \n\nFor most users, understanding the performance of GPU training jobs is nontrivial. Thus, we built this library to simplify the task of trace analysis and provide the user useful insights by examining the model execution traces. As the first step, we developed features which are important and generic enough so that most users can benefit from this library.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "**Temporal Breakdown**: We begin by asking whether the GPU is spending time on computation, communication, memory events, or is it idle? To answer this question, the temporal breakdown feature presents a breakdown in terms of these categories. To achieve high training efficiency the code should maximize time used by computation kernels and minimize idle time and non-compute time (time used by communication or memory kernels). This is accomplished by implementing concurrent execution of computation kernels", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "with communication or memory kernels. *Note that, during concurrent execution of computation kernels with communication/memory kernels the time spent by communication/memory kernels is accounted for under compute time.*", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "{:width=\"100%\"}\n\n*Figure 2: Temporal Breakdown across 8 GPUs*\n{: style=\"text-align: center;\"}\n\n**Kernel Breakdown**: It is natural to ask which kernels are taking the most amount of time. The next feature breaks down the time spent within each kernel type (COMM, COMP, MEM) and sorts them by duration. We present this information for each kernel type and for each rank as a pie chart. See figure 3 below.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "{:width=\"100%\"}\n\n*Figure 3: Pie chart of top computation and communication kernels*\n{: style=\"text-align: center;\"}", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "**Kernel Duration Distribution**: Subsequently, one can also ask - for any given kernel, what is the distribution of the time spent across the ranks? To answer this, HTA generates bar graphs for the average duration of a given kernel across all ranks. Additionally, the error bars in the bar graphs show the minimum and maximum amount of time taken by a given kernel on a given rank. Figure 4 below shows a discrepancy between average duration on rank 0 as compared to other ranks. This anomalous behavior on", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "rank 0 guides the user on where to look for possible bugs.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "{:width=\"100%\"}\n\n*Figure 4: Average duration of NCCL AllReduce Kernel across 8 ranks*\n{: style=\"text-align: center;\"}", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "**Communication Computation Overlap**: In distributed training, a significant amount of time is spent in communication and synchronization events among multiple GPU devices. To achieve high GPU efficiency (i.e. TFLOPS/GPU) it is vital to keep the GPU doing actual computation work. In other words, a GPU should not be blocked because of waiting for data from other GPUs. One way to measure the extent to which computation is blocked by data dependencies is to calculate the computation-communication overlap.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "Higher GPU efficiency is observed if communication events overlap computation events. Lack of communication and computation overlap will lead to the GPU being idle, thus the efficiency would be low. Thus, the communication computation overlap feature calculates the percentage of time communication and computation overlap in a job for each rank and generates a bar graph representation. See figure below. More precisely, we measure the following ratio", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "(time spent in computation while communicating) / (time spent in communication)\n{: style=\"text-align: center;\"}\n\n\n{:width=\"100%\"}\n\n*Figure 5: Communication computation overlap*\n{: style=\"text-align: center;\"}", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "**Augmented Counters (Queue length, Memory bandwidth)**: To aid in debugging, HTA calculates the memory bandwidth statistics for D2H, H2D and D2D memory copy (memcpy) and memory set (memset) events. Additionally, HTA also computes the number of outstanding CUDA operations on each CUDA stream. We refer to this as queue length. When the queue length on a stream is 1024 or larger new events cannot be scheduled on that stream and the CPU will stall until the GPU events have processed. Additionally, HTA", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "generates a new trace file containing tracks with the memory bandwidth and queue length time series. See Figure 6 below.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "{:width=\"100%\"}\n\n*Figure 6: Memory Bandwidth and Queue Length*\n{: style=\"text-align: center;\"}\n\nThese primary features give us a peek into the system performance and help answer \u201cwhat is happening in the system?\u201d. As HTA evolves, we hope to address \u201cwhy is X happening?\u201d and also suggest possible solutions to overcome the bottlenecks.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "Installation and Usage\n\n### Installation\n\nFor installing the HTA please refer to the [README](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/README.md). In brief, the user is required to clone the [repo](https://github.com/facebookresearch/HolisticTraceAnalysis) and install the necessary Python packages via pip.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "Usage\n\nThis version of Holistic Trace Analysis is currently in beta and we recommend using HTA in a Jupyter notebook. A [demo notebook](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/examples/trace_analysis_demo.ipynb) is provided for your convenience. To get started, import the hta package in a Jupyter notebook, create a TraceAnalysis object and off we go in exactly two lines of code.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfrom hta.trace_analysis import TraceAnalysis\nanalyzer = TraceAnalysis(trace_dir = \u201c/trace/folder/path\u201d)\n```", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "Requirements\n\n- All trace files for a training or inference job must be stored in a unique folder.\n- Trace files are in json or gzipped json format.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "FAQ\n\n#### Q. How can I install HTA?\n\nPlease see the [README](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/README.md) in the root directory of the repository.\n\n#### Q. Is there any documentation on the features and API in HTA?\n\nThe documentation and detailed API is available [here](https://hta.readthedocs.io/).", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "Q. Can you implement feature X?\n\nDepending on how widely the feature is needed and the level of effort required to implement it we would consider developing the feature. Please open a [Github Issue](https://github.com/facebookresearch/HolisticTraceAnalysis/issues) and tag it with the feature-request label.\n\n#### Q. Can I modify the code?\n\nPlease do and [send a PR](https://github.com/facebookresearch/HolisticTraceAnalysis/pulls) along the way, if you think it would be useful for others.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "Q. How can I collect traces in PyTorch?\n\nPlease refer to this tutorial [here](https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html#use-profiler-to-record-execution-events).\n\n#### Q. Can HTA be used at production scale?\n\nYes, please see a use case study [here](https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/).", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch adds new dev tools as it hits production scale'\nauthor: The PyTorch Team\n---\n\n_This is a partial re-post of the original blog post on the Facebook AI Blog. The full post can be [viewed here](https://ai.facebook.com/blog/pytorch-adds-new-dev-tools-as-it-hits-production-scale/)_", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "Since its release just a few months ago, [PyTorch 1.0](http://pytorch.org/) has been rapidly adopted as a powerful, flexible deep learning platform that enables engineers and researchers to move quickly from research to production. We are highlighting some of the ways the AI engineering and research community is using PyTorch 1.0. We\u2019re also sharing new details about the latest release, PyTorch 1.1, and showcasing some of the new development tools created by the community.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "Building on the initial launch of PyTorch in 2017, we partnered with the AI community to ship the stable release of PyTorch 1.0 [last December](https://code.fb.com/ai-research/pytorch-developer-ecosystem-expands-1-0-stable-release/). Along with enhanced production-oriented capabilities and deep integration with leading cloud platforms, PyTorch 1.0 expands on the open source library\u2019s core features, with the addition of PyTorch JIT (Just in time compilation) that seamlessly transitions between eager mode and", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "graph mode to provide both flexibility and speed.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "Leading businesses across industries are beginning to use PyTorch to both facilitate their research and then also deploy at large scale for applications such as translation, computer vision, conversational interfaces, pharmaceutical research, factory optimization, and automated driving research. Community adoption of PyTorch has also continued to expand. Stanford, UC Berkeley, Caltech, and other universities are using PyTorch as a fundamental tool for their machine learning (ML) courses; new ecosystem", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "projects have launched to support development on PyTorch; and major cloud platforms have expanded their integration with PyTorch.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "Using PyTorch across industries\n\nMany leading businesses are moving to PyTorch 1.0 to accelerate development and deployment of new AI systems. Here are some examples:", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- Airbnb leveraged PyTorch's rich libraries and APIs for conversational AI and deployed a Smart Reply to help the company\u2019s service agents respond more effectively to customers.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- [ATOM](https://atomscience.org/) is building a platform to generate and optimize new drug candidates significantly faster and with greater success than conventional processes. Using machine learning frameworks such as PyTorch, ATOM was able to design a variational autoencoder for representing diverse chemical structures and designing new drug candidates.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- Genentech is utilizing PyTorch\u2019s flexible control structures and dynamic graphs to train deep learning models that will aid in the development of individualized cancer therapy.\n- Microsoft is using PyTorch across its organization to develop ML models at scale and deploy them via the ONNX Runtime. Using PyTorch, Microsoft Cognition has built distributed language models that scale to billions of words and are now in production in offerings such as Cognitive Services.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- Toyota Research Institute (TRI) is developing a two-pronged approach toward automated driving with Toyota Guardian and Toyota Chauffeur technologies. The Machine Learning Team at TRI is creating new deep learning algorithms to leverage Toyota's 10 million sales per year data advantage. The flexibility of PyTorch has vastly accelerated their pace of exploration and its new production features will enable faster deployment towards their safety critical applications.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "1. **Breakdown by Dimensions**\n 1. **Temporal**: Breakdown of GPU time in terms of time spent in computation, communication, memory events, and idle time on a single node and across all ranks.\n 1. **Idle Time**: Breakdown of GPU idle time into waiting for the host, waiting for another kernel or attributed to an unknown cause.\n 1. **Kernel**: Find kernels with the longest duration on each rank.\n 1. **Communication Computation Overlap**: Calculate the percentage of time when communication overlaps computation.\n1. **Statistical Analysis**\n 1. **Kernel Duration Distribution**: Distribution of average time taken by longest kernels across different ranks.\n 1. **CUDA Kernel Launch**: Distributions of GPU kernels with very small duration, large duration, and excessive launch time.\n 1. **Augmented Counters (Memory bandwidth, Queue length)**: Augmented trace files which provide insights into memory copy bandwidth and number of outstanding operations on each CUDA stream.\n1. **Patterns**\n 1. **Frequent CUDA Kernels**: Find the CUDA kernels most frequently launched by any given PyTorch or user defined operator.\n1. **Trace Comparison**\n 1. **Trace Diff**: A trace comparison tool to identify and visualize the differences between traces.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "HTA source code is available to users via [Github](https://github.com/facebookresearch/HolisticTraceAnalysis). Users can request new features or build their own analysis using the core libraries and data structures provided in the codebase in addition to the features mentioned above.\n\n## GPU Training Performance Debugging 101\n\nTo understand the GPU performance in distributed training jobs, we consider how the model operators interact with the GPU devices and how such interactions are reflected in certain measurable metrics.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "At a high level, we can break down the GPU operations in a model execution into three broad categories, henceforth referred to as kernel types: \n1. **Computation (COMP)** - Compute kernels execute compiled routines for matrix multiplication and similar numeric calculations. They are responsible for all of the number-crunching necessary for model execution. \n1. **Communication (COMM)** - Communication kernels are routines which are responsible for exchanging and synchronizing data between different GPU devices in a distributed training job. The NVIDIA Collective Communication Library (NCCL) is a widely used communication library and all its kernels have the prefix \u201cnccl\u201d. Example NCCL kernels include NCCL_AllGather, NCCL_ReduceScatter, NCCL_AllReduce, etc. \n1. **Memory (MEM)** - Memory kernels manage the memory allocations/deallocations on the GPU devices and data movement between the memory space on the host and the GPUs. The memory kernels include Memcpy_H2D, Memcpy_D2H, Memcpy_D2D, Memset, etc. Here, H represents the Host and D represents the GPU Device. Thus, H2D, D2H, D2D stands for Host to Device, Device to Host and Device to Device respectively.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "Because a modern GPU device like the NVIDIA A100 GPU is a massively parallel device which is capable of running multiple kernels simultaneously, it is possible to overlap the computation, communication, and memory kernels to reduce the model execution time. One common technique to achieve the overlap is to utilize multiple CUDA streams. A CUDA stream is a sequence of operations that execute on a GPU device in the order in which they are issued by the host code. Different CUDA streams can be interleaved and even run concurrently, thus achieving the effect of kernel overlap.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "To help understand the above concepts, Figure 1 provides a timeline of the GPU kernels in a sample distributed training job on 8 GPUs for one iteration. In the figure below, each rank represents one GPU and the kernels on each GPU run on 6 CUDA streams. In the right column of the figure, you can see names of the GPU kernels used. In the middle of the figure, you see the overlap between compute and communicate kernels. This figure is created using the [plot_timeline example notebook](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/examples/plot_timeline.ipynb) available in HTA.\n\n{:width=\"100%\"}\n\n*Figure 1. An example of the execution timeline of GPU Kernels across multiple ranks*\n{: style=\"text-align: center;\"}", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "The performance of multiple GPU training jobs is affected by multiple factors. Among these factors, how does a model execution create and orchestrate the GPU kernels plays a critical role. HTA provides insights on how the model execution interacts with the GPU devices and highlights the opportunities for performance improvement.\n\nWith the features we built in HTA, we aim to provide users insights into \u201cwhat is happening under the hood in a distributed GPU training?\u201d We briefly describe these features in the next few paragraphs.\n\n## Features in Holistic Trace Analysis \n\nFor most users, understanding the performance of GPU training jobs is nontrivial. Thus, we built this library to simplify the task of trace analysis and provide the user useful insights by examining the model execution traces. As the first step, we developed features which are important and generic enough so that most users can benefit from this library.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "**Temporal Breakdown**: We begin by asking whether the GPU is spending time on computation, communication, memory events, or is it idle? To answer this question, the temporal breakdown feature presents a breakdown in terms of these categories. To achieve high training efficiency the code should maximize time used by computation kernels and minimize idle time and non-compute time (time used by communication or memory kernels). This is accomplished by implementing concurrent execution of computation kernels with communication or memory kernels. *Note that, during concurrent execution of computation kernels with communication/memory kernels the time spent by communication/memory kernels is accounted for under compute time.*\n\n{:width=\"100%\"}\n\n*Figure 2: Temporal Breakdown across 8 GPUs*\n{: style=\"text-align: center;\"}", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "**Kernel Breakdown**: It is natural to ask which kernels are taking the most amount of time. The next feature breaks down the time spent within each kernel type (COMM, COMP, MEM) and sorts them by duration. We present this information for each kernel type and for each rank as a pie chart. See figure 3 below. \n\n{:width=\"100%\"}\n\n*Figure 3: Pie chart of top computation and communication kernels*\n{: style=\"text-align: center;\"}", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "**Kernel Duration Distribution**: Subsequently, one can also ask - for any given kernel, what is the distribution of the time spent across the ranks? To answer this, HTA generates bar graphs for the average duration of a given kernel across all ranks. Additionally, the error bars in the bar graphs show the minimum and maximum amount of time taken by a given kernel on a given rank. Figure 4 below shows a discrepancy between average duration on rank 0 as compared to other ranks. This anomalous behavior on rank 0 guides the user on where to look for possible bugs.\n\n{:width=\"100%\"}\n\n*Figure 4: Average duration of NCCL AllReduce Kernel across 8 ranks*\n{: style=\"text-align: center;\"}", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "**Communication Computation Overlap**: In distributed training, a significant amount of time is spent in communication and synchronization events among multiple GPU devices. To achieve high GPU efficiency (i.e. TFLOPS/GPU) it is vital to keep the GPU doing actual computation work. In other words, a GPU should not be blocked because of waiting for data from other GPUs. One way to measure the extent to which computation is blocked by data dependencies is to calculate the computation-communication overlap. Higher GPU efficiency is observed if communication events overlap computation events. Lack of communication and computation overlap will lead to the GPU being idle, thus the efficiency would be low. Thus, the communication computation overlap feature calculates the percentage of time communication and computation overlap in a job for each rank and generates a bar graph representation. See figure below. More precisely, we measure the following ratio", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "(time spent in computation while communicating) / (time spent in communication)\n{: style=\"text-align: center;\"}\n\n\n{:width=\"100%\"}\n\n*Figure 5: Communication computation overlap*\n{: style=\"text-align: center;\"}\n\n**Augmented Counters (Queue length, Memory bandwidth)**: To aid in debugging, HTA calculates the memory bandwidth statistics for D2H, H2D and D2D memory copy (memcpy) and memory set (memset) events. Additionally, HTA also computes the number of outstanding CUDA operations on each CUDA stream. We refer to this as queue length. When the queue length on a stream is 1024 or larger new events cannot be scheduled on that stream and the CPU will stall until the GPU events have processed. Additionally, HTA generates a new trace file containing tracks with the memory bandwidth and queue length time series. See Figure 6 below.\n\n{:width=\"100%\"}", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "*Figure 6: Memory Bandwidth and Queue Length*\n{: style=\"text-align: center;\"}\n\nThese primary features give us a peek into the system performance and help answer \u201cwhat is happening in the system?\u201d. As HTA evolves, we hope to address \u201cwhy is X happening?\u201d and also suggest possible solutions to overcome the bottlenecks.\n\n## Installation and Usage\n\n### Installation\n\nFor installing the HTA please refer to the [README](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/README.md). In brief, the user is required to clone the [repo](https://github.com/facebookresearch/HolisticTraceAnalysis) and install the necessary Python packages via pip.", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "### Usage\n\nThis version of Holistic Trace Analysis is currently in beta and we recommend using HTA in a Jupyter notebook. A [demo notebook](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/examples/trace_analysis_demo.ipynb) is provided for your convenience. To get started, import the hta package in a Jupyter notebook, create a TraceAnalysis object and off we go in exactly two lines of code.\n\n```python\nfrom hta.trace_analysis import TraceAnalysis\nanalyzer = TraceAnalysis(trace_dir=\"/trace/folder/path\")\n```\n\n### Requirements\n\n- All trace files for a training or inference job must be stored in a unique folder.\n- Trace files are in json or gzipped json format.\n\n## FAQ\n\n#### Q. How can I install HTA?\n\nPlease see the [README](https://github.com/facebookresearch/HolisticTraceAnalysis/blob/main/README.md) in the root directory of the repository.\n\n#### Q. Is there any documentation on the features and API in HTA?", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
+{"page_content": "The documentation and detailed API is available [here](https://hta.readthedocs.io/).\n\n#### Q. Can you implement feature X?\n\nDepending on how widely the feature is needed and the level of effort required to implement it we would consider developing the feature. Please open a [Github Issue](https://github.com/facebookresearch/HolisticTraceAnalysis/issues) and tag it with the feature-request label.\n\n#### Q. Can I modify the code?\n\nPlease do and [send a PR](https://github.com/facebookresearch/HolisticTraceAnalysis/pulls) along the way, if you think it would be useful for others.\n\n#### Q. How can I collect traces in PyTorch?\n\nPlease refer to this tutorial [here](https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html#use-profiler-to-record-execution-events).\n\n#### Q. Can HTA be used at production scale?\n\nYes, please see a use case study [here](https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/).", "metadata": {"source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"}}
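To tie the pieces of this post together (the two-line TraceAnalysis setup above and the FAQ pointer on collecting traces), here is a minimal end-to-end sketch. It is an illustration rather than code from the post: the toy model, step counts, and the ./traces folder are placeholder choices, and get_temporal_breakdown() is used as described in the HTA documentation, so check the demo notebook for the current method names. Trace collection itself uses the standard torch.profiler API.

```python
import torch
from torch.profiler import ProfilerActivity, profile, schedule, tensorboard_trace_handler

from hta.trace_analysis import TraceAnalysis

# Toy model and loop, assuming a CUDA GPU is available; substitute your real training step.
model = torch.nn.Linear(128, 64).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=1, warmup=1, active=3),
    on_trace_ready=tensorboard_trace_handler("./traces"),  # writes Kineto trace files into ./traces
) as prof:
    for _ in range(8):
        loss = model(torch.randn(32, 128, device="cuda")).sum()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        prof.step()  # advances the profiler schedule each iteration

# Point HTA at the folder holding the collected trace files.
analyzer = TraceAnalysis(trace_dir="./traces")
time_df = analyzer.get_temporal_breakdown()  # per-rank computation/communication/memory/idle split
```

The profiler writes one Kineto trace file per process into the folder, in json or gzipped json form, which matches the requirements listed above.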
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch adds new dev tools as it hits production scale'\nauthor: The PyTorch Team\n---\n\n_This is a partial re-post of the original blog post on the Facebook AI Blog. The full post can be [viewed here](https://ai.facebook.com/blog/pytorch-adds-new-dev-tools-as-it-hits-production-scale/)_\n\nSince its release just a few months ago, [PyTorch 1.0](http://pytorch.org/) has been rapidly adopted as a powerful, flexible deep learning platform that enables engineers and researchers to move quickly from research to production. We are highlighting some of the ways the AI engineering and research community is using PyTorch 1.0. We\u2019re also sharing new details about the latest release, PyTorch 1.1, and showcasing some of the new development tools created by the community.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "Building on the initial launch of PyTorch in 2017, we partnered with the AI community to ship the stable release of PyTorch 1.0 [last December](https://code.fb.com/ai-research/pytorch-developer-ecosystem-expands-1-0-stable-release/). Along with enhanced production-oriented capabilities and deep integration with leading cloud platforms, PyTorch 1.0 expands on the open source library\u2019s core features, with the addition of PyTorch JIT (Just in time compilation) that seamlessly transitions between eager mode and graph mode to provide both flexibility and speed.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "Leading businesses across industries are beginning to use PyTorch to both facilitate their research and then also deploy at large scale for applications such as translation, computer vision, conversational interfaces, pharmaceutical research, factory optimization, and automated driving research. Community adoption of PyTorch has also continued to expand. Stanford, UC Berkeley, Caltech, and other universities are using PyTorch as a fundamental tool for their machine learning (ML) courses; new ecosystem projects have launched to support development on PyTorch; and major cloud platforms have expanded their integration with PyTorch.\n\n## Using PyTorch across industries\n\nMany leading businesses are moving to PyTorch 1.0 to accelerate development and deployment of new AI systems. Here are some examples:", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "- Airbnb leveraged PyTorch's rich libraries and APIs for conversational AI and deployed a Smart Reply to help the company\u2019s service agents respond more effectively to customers.\n- [ATOM](https://atomscience.org/) is building a platform to generate and optimize new drug candidates significantly faster and with greater success than conventional processes. Using machine learning frameworks such as PyTorch, ATOM was able to design a variational autoencoder for representing diverse chemical structures and designing new drug candidates.\n- Genentech is utilizing PyTorch\u2019s flexible control structures and dynamic graphs to train deep learning models that will aid in the development of individualized cancer therapy.\n- Microsoft is using PyTorch across its organization to develop ML models at scale and deploy them via the ONNX Runtime. Using PyTorch, Microsoft Cognition has built distributed language models that scale to billions of words and are now in production in offerings such as Cognitive Services.\n- Toyota Research Institute (TRI) is developing a two-pronged approach toward automated driving with Toyota Guardian and Toyota Chauffeur technologies. The Machine Learning Team at TRI is creating new deep learning algorithms to leverage Toyota's 10 million sales per year data advantage. The flexibility of PyTorch has vastly accelerated their pace of exploration and its new production features will enable faster deployment towards their safety critical applications.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
{"page_content": "Following the release of PyTorch 1.0 in December 2018, we\u2019re now announcing the availability of v1.1, which improves performance, adds new model understanding and visualization tools to improve usability, and provides new APIs.\n\nKey features of PyTorch v1.1 include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- [TensorBoard](https://www.tensorflow.org/tensorboard): First-class and native support for visualization and model debugging with TensorBoard, a web application suite for inspecting and understanding training runs and graphs. PyTorch now natively supports TensorBoard with a simple \u201cfrom torch.utils.tensorboard import SummaryWriter\u201d command.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- JIT compiler: Improvements to just-in-time (JIT) compilation. These include various bug fixes as well as expanded capabilities in TorchScript, such as support for dictionaries, user classes, and attributes.\n- New APIs: Support for Boolean tensors and better support for custom recurrent neural networks.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- Distributed Training: Improved performance for common models such as CNNs, added support for multi device modules including the ability to split models across GPUs while still using Distributed Data Parallel (DDP) and support for modules where not all parameters are used in every iteration (e.g. control flow, like adaptive softmax, etc). See the latest tutorials [here](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "We\u2019ve also continued to partner with the community to foster projects and tools aimed at supporting ML engineers for needs ranging from improved model understanding to auto-tuning using AutoML methods. With the release of Ax and BoTorch (below), we will be sharing some of our core algorithms, including meta-learning for efficiently optimizing hyperparameters from based on historical tasks. We are excited to see this work open-sourced for the community to build on.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "This ecosystem includes open source projects and tools that have been deployed at production scale, as well as products and services from our partnership with industry leaders who share our vision of an open and collaborative AI community. Here are a few of the latest tools:", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- [BoTorch](https://ai.facebook.com/blog/open-sourcing-ax-and-botorch-new-ai-tools-for-adaptive-experimentation/): BoTorch is a research framework built on top of PyTorch to provide Bayesian optimization, a sample-efficient technique for sequential optimization of costly-to-evaluate black-box functions.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- [Ax](https://ai.facebook.com/blog/open-sourcing-ax-and-botorch-new-ai-tools-for-adaptive-experimentation/): Ax is an ML platform for managing adaptive experiments. It enables researchers and engineers to systematically explore large configuration spaces in order to optimize machine learning models, infrastructure, and products.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- [PyTorch-BigGraph](https://ai.facebook.com/blog/open-sourcing-pytorch-biggraph-for-faster-embeddings-of-extremely-large-graphs/): PBG is a distributed system for creating embeddings of very large graphs with billions of entities and trillions of edges. It includes support for sharding and negative sampling and it offers sample use cases based on Wikidata embeddings.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- [Google AI Platform Notebooks](https://cloud.google.com/ai-platform-notebooks/): AI Platform Notebooks is a new, hosted JupyterLab service from Google Cloud Platform. Data scientists can quickly create virtual machines running JupyterLab with the latest version of PyTorch preinstalled. It is also tightly integrated with GCP services such as BigQuery, Cloud Dataproc, Cloud Dataflow, and AI Factory, making it easy to execute the full ML cycle without ever leaving JupyterLab.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "We\u2019re also excited to see many interesting new projects from the broader PyTorch community. Highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- [BigGAN-PyTorch](https://github.com/ajbrock/BigGAN-PyTorch):This is a full PyTorch reimplementation that uses gradient accumulation to provide the benefits of big batches on as few as four GPUs.\n- [GeomLoss](http://www.kernel-operations.io/geomloss/index.html): A Python API that defines PyTorch layers for geometric loss functions between sampled measures, images, and volumes. It includes MMD, Wasserstein, Sinkhorn, and more.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- [PyTorch Geometric](https://github.com/rusty1s/pytorch_geometric): A deep learning extension library for PyTorch that offers several methods for deep learning on graphs and other irregular structures (also known as [geometric deep learning](http://geometricdeeplearning.com)) from a variety of published papers.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "- [Curve-GCN](https://github.com/fidler-lab/curve-gcn): A real-time, interactive image annotation approach that uses an end-to-end-trained graph convolutional network (GCN). It supports object annotation by either polygons or splines, facilitating labeling efficiency for both line-based and curved objects. Curve-GCN runs 10x faster than traditional methods, such as Polygon-RNN++.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "Udacity, fast.ai, and others develop new PyTorch resources\n\nPyTorch is ideal for teaching ML development because it enables rapid experimentation through its flexible, dynamic programming environment and user-friendly Pythonic interface. In addition, Google Colab now offers an interactive Jupyter Notebook environment that natively supports PyTorch, allowing developers to run any PyTorch tutorial immediately with free CPU and GPU resources.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "University-level classes \u2014 including [Stanford NLP](http://web.stanford.edu/class/cs224n), [UC Berkeley](https://inst.eecs.berkeley.edu/~cs280/sp18/) Computer Vision, and [Caltech](http://cast.caltech.edu) Robotics courses \u2014 are now being taught on PyTorch. In addition, massive open online courses (MOOCs) are training thousands of new PyTorch developers.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "Today, we\u2019re announcing a [new Udacity course](https://blog.udacity.com/2019/05/announcing-the-secure-and-private-ai-scholarship-challenge-with-facebook.html), building upon the Intro to Deep Learning course launched last year. This new course, led by Andrew Trask of Oxford University and OpenMined, covers important concepts around privacy in AI, including methods such as differential privacy and federated learning. Facebook will also be providing scholarships to support students as they continue their ML", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "education in Udacity\u2019s full Nanodegree programs.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "The [fast.ai](https://www.fast.ai) community is also continuing to invest energy and resources in PyTorch. In June, fast.ai will launch a new course called Deep Learning from the Foundations, which will show developers how to go all the way from writing matrix multiplication from scratch to how to train and implement a state-of-the-art ImageNet model. The course will include deep dives into the underlying implementation of methods in the PyTorch and fast.ai libraries, and will use the code to explain and", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "illustrate the academic papers that underlie these methods.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "As part of the course, fast.ai will also release new software modules, including fastai.audio, which brings the power of fast.ai\u2019s deep abstractions and curated algorithms to the new PyTorch.audio module, and show how fastai.vision can be used to [create stunning high-resolution videos](https://www.fast.ai/2019/05/03/decrappify) from material such as old classic movies, and from cutting-edge microscopy sequences through a collaboration with the [Salk Institute](https://www.salk.edu). In addition, fast.ai is", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "contributing its new X-ResNet module, including a suite of models pretrained on ImageNet.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "Getting started with PyTorch\n\nEveryone in the AI community \u2014 including those new to ML development as well as researchers and engineers looking for ways to accelerate their end-to-end workflows \u2014 can experiment with PyTorch instantly by visiting [pytorch.org](https://pytorch.org) and launching a [tutorial](https://pytorch.org/tutorials) in Colab. There are also many easy ways to [get started](https://pytorch.org/get-started/locally) both locally and on popular cloud platforms.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing Accelerated PyTorch Training on Mac\"\nauthor: PyTorch\nfeatured-img: \"/assets/images/METAPT-002-BarGraph-02-static.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}
-{"page_content": "In collaboration with the Metal engineering team at Apple, we are excited to announce support for GPU-accelerated PyTorch training on Mac. Until now, PyTorch training on Mac only leveraged the CPU, but with the upcoming PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}
-{"page_content": "Metal Acceleration\n\nAccelerated GPU training is enabled using Apple\u2019s Metal Performance Shaders (MPS) as a backend for PyTorch. The MPS backend extends the PyTorch framework, providing scripts and capabilities to set up and run operations on Mac. MPS optimizes compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family. The new device maps machine learning computational graphs and primitives on the MPS Graph framework and tuned kernels provided by MPS.", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}
-{"page_content": "Training Benefits on Apple Silicon\n\nEvery Apple silicon Mac has a unified memory architecture, providing the GPU with direct access to the full memory store. This makes Mac a great platform for machine learning, enabling users to train larger networks or batch sizes locally. This reduces costs associated with cloud-based development or the need for additional local GPUs. The Unified Memory architecture also reduces data retrieval latency, improving end-to-end performance.", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}
-{"page_content": "In the graphs below, you can see the performance speedup from accelerated GPU training and evaluation compared to the CPU baseline:\n\n*Accelerated GPU training and evaluation speedups over CPU-only (times faster)*", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}
-{"page_content": "Getting Started\n\nTo get started, just install the latest [Preview (Nightly) build](https://pytorch.org/get-started/locally/) on your Apple silicon Mac running macOS 12.3 or later with a native version (arm64) of Python.\n \nYou can also learn more about Metal and MPS on [Apple\u2019s Metal page](https://developer.apple.com/metal/).", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}
+{"page_content": "- [TensorBoard](https://www.tensorflow.org/tensorboard): First-class and native support for visualization and model debugging with TensorBoard, a web application suite for inspecting and understanding training runs and graphs. PyTorch now natively supports TensorBoard with a simple \u201cfrom torch.utils.tensorboard import SummaryWriter\u201d command.\n- JIT compiler: Improvements to just-in-time (JIT) compilation. These include various bug fixes as well as expanded capabilities in TorchScript, such as support for dictionaries, user classes, and attributes.\n- New APIs: Support for Boolean tensors and better support for custom recurrent neural networks.\n- Distributed Training: Improved performance for common models such as CNNs, added support for multi device modules including the ability to split models across GPUs while still using Distributed Data Parallel (DDP) and support for modules where not all parameters are used in every iteration (e.g. control flow, like adaptive softmax, etc). See the latest tutorials [here](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
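+{"page_content": "As a quick illustration of the TensorBoard support mentioned in the list above, here is a minimal sketch (not part of the original post; the tag name and logged values are made up for illustration):\n\n```python\nfrom torch.utils.tensorboard import SummaryWriter\n\n# Create a writer; event files go to ./runs/ by default\nwriter = SummaryWriter()\n\n# Log one scalar per step (tag and values are illustrative only)\nfor step in range(100):\n    writer.add_scalar(\"train/loss\", 1.0 / (step + 1), step)\n\nwriter.close()\n# Then view the run with: tensorboard --logdir=runs\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}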
+{"page_content": "We\u2019ve also continued to partner with the community to foster projects and tools aimed at supporting ML engineers for needs ranging from improved model understanding to auto-tuning using AutoML methods. With the release of Ax and BoTorch (below), we will be sharing some of our core algorithms, including meta-learning for efficiently optimizing hyperparameters based on historical tasks. We are excited to see this work open-sourced for the community to build on.\n\nThis ecosystem includes open source projects and tools that have been deployed at production scale, as well as products and services from our partnership with industry leaders who share our vision of an open and collaborative AI community. Here are a few of the latest tools:", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "- [BoTorch](https://ai.facebook.com/blog/open-sourcing-ax-and-botorch-new-ai-tools-for-adaptive-experimentation/): BoTorch is a research framework built on top of PyTorch to provide Bayesian optimization, a sample-efficient technique for sequential optimization of costly-to-evaluate black-box functions.\n- [Ax](https://ai.facebook.com/blog/open-sourcing-ax-and-botorch-new-ai-tools-for-adaptive-experimentation/): Ax is an ML platform for managing adaptive experiments. It enables researchers and engineers to systematically explore large configuration spaces in order to optimize machine learning models, infrastructure, and products.\n- [PyTorch-BigGraph](https://ai.facebook.com/blog/open-sourcing-pytorch-biggraph-for-faster-embeddings-of-extremely-large-graphs/): PBG is a distributed system for creating embeddings of very large graphs with billions of entities and trillions of edges. It includes support for sharding and negative sampling and it offers sample use cases based on Wikidata embeddings.\n- [Google AI Platform Notebooks](https://cloud.google.com/ai-platform-notebooks/): AI Platform Notebooks is a new, hosted JupyterLab service from Google Cloud Platform. Data scientists can quickly create virtual machines running JupyterLab with the latest version of PyTorch preinstalled. It is also tightly integrated with GCP services such as BigQuery, Cloud Dataproc, Cloud Dataflow, and AI Factory, making it easy to execute the full ML cycle without ever leaving JupyterLab.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "We\u2019re also excited to see many interesting new projects from the broader PyTorch community. Highlights include:\n\n- [BigGAN-PyTorch](https://github.com/ajbrock/BigGAN-PyTorch): This is a full PyTorch reimplementation that uses gradient accumulation to provide the benefits of big batches on as few as four GPUs.\n- [GeomLoss](http://www.kernel-operations.io/geomloss/index.html): A Python API that defines PyTorch layers for geometric loss functions between sampled measures, images, and volumes. It includes MMD, Wasserstein, Sinkhorn, and more.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "- [PyTorch Geometric](https://github.com/rusty1s/pytorch_geometric): A deep learning extension library for PyTorch that offers several methods for deep learning on graphs and other irregular structures (also known as [geometric deep learning](http://geometricdeeplearning.com)) from a variety of published papers.\n- [Curve-GCN](https://github.com/fidler-lab/curve-gcn): A real-time, interactive image annotation approach that uses an end-to-end-trained graph convolutional network (GCN). It supports object annotation by either polygons or splines, facilitating labeling efficiency for both line-based and curved objects. Curve-GCN runs 10x faster than traditional methods, such as Polygon-RNN++.\n\n## Udacity, fast.ai, and others develop new PyTorch resources", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "PyTorch is ideal for teaching ML development because it enables rapid experimentation through its flexible, dynamic programming environment and user-friendly Pythonic interface. In addition, Google Colab now offers an interactive Jupyter Notebook environment that natively supports PyTorch, allowing developers to run any PyTorch tutorial immediately with free CPU and GPU resources.\n\nUniversity-level classes \u2014 including [Stanford NLP](http://web.stanford.edu/class/cs224n), [UC Berkeley](https://inst.eecs.berkeley.edu/~cs280/sp18/) Computer Vision, and [Caltech](http://cast.caltech.edu) Robotics courses \u2014 are now being taught on PyTorch. In addition, massive open online courses (MOOCs) are training thousands of new PyTorch developers.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "Today, we\u2019re announcing a [new Udacity course](https://blog.udacity.com/2019/05/announcing-the-secure-and-private-ai-scholarship-challenge-with-facebook.html), building upon the Intro to Deep Learning course launched last year. This new course, led by Andrew Trask of Oxford University and OpenMined, covers important concepts around privacy in AI, including methods such as differential privacy and federated learning. Facebook will also be providing scholarships to support students as they continue their ML education in Udacity\u2019s full Nanodegree programs.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "The [fast.ai](https://www.fast.ai) community is also continuing to invest energy and resources in PyTorch. In June, fast.ai will launch a new course called Deep Learning from the Foundations, which will show developers how to go all the way from writing matrix multiplication from scratch to how to train and implement a state-of-the-art ImageNet model. The course will include deep dives into the underlying implementation of methods in the PyTorch and fast.ai libraries, and will use the code to explain and illustrate the academic papers that underlie these methods.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "As part of the course, fast.ai will also release new software modules, including fastai.audio, which brings the power of fast.ai\u2019s deep abstractions and curated algorithms to the new PyTorch.audio module, and show how fastai.vision can be used to [create stunning high-resolution videos](https://www.fast.ai/2019/05/03/decrappify) from material such as old classic movies, and from cutting-edge microscopy sequences through a collaboration with the [Salk Institute](https://www.salk.edu). In addition, fast.ai is contributing its new X-ResNet module, including a suite of models pretrained on ImageNet.\n\n## Getting started with PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "Everyone in the AI community \u2014 including those new to ML development as well as researchers and engineers looking for ways to accelerate their end-to-end workflows \u2014 can experiment with PyTorch instantly by visiting [pytorch.org](https://pytorch.org) and launching a [tutorial](https://pytorch.org/tutorials) in Colab. There are also many easy ways to [get started](https://pytorch.org/get-started/locally) both locally and on popular cloud platforms.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing Accelerated PyTorch Training on Mac\"\nauthor: PyTorch\nfeatured-img: \"/assets/images/METAPT-002-BarGraph-02-static.png\"\n---\n\nIn collaboration with the Metal engineering team at Apple, we are excited to announce support for GPU-accelerated PyTorch training on Mac. Until now, PyTorch training on Mac only leveraged the CPU, but with the upcoming PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.\n\n## Metal Acceleration", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}
+{"page_content": "Accelerated GPU training is enabled using Apple\u2019s Metal Performance Shaders (MPS) as a backend for PyTorch. The MPS backend extends the PyTorch framework, providing scripts and capabilities to set up and run operations on Mac. MPS optimizes compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family. The new device maps machine learning computational graphs and primitives on the MPS Graph framework and tuned kernels provided by MPS. \n\n## Training Benefits on Apple Silicon\n\nEvery Apple silicon Mac has a unified memory architecture, providing the GPU with direct access to the full memory store. This makes Mac a great platform for machine learning, enabling users to train larger networks or batch sizes locally. This reduces costs associated with cloud-based development or the need for additional local GPUs. The Unified Memory architecture also reduces data retrieval latency, improving end-to-end performance.", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}
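+{"page_content": "To make the backend concrete, here is a minimal sketch (not part of the original post) of moving work onto the \"mps\" device; it assumes a PyTorch build with MPS support running on an Apple silicon Mac:\n\n```python\nimport torch\n\n# Fall back to CPU if the MPS backend is not available on this machine\ndevice = torch.device(\"mps\") if torch.backends.mps.is_available() else torch.device(\"cpu\")\n\nmodel = torch.nn.Linear(8, 2).to(device)   # any nn.Module moves the same way\nx = torch.randn(4, 8, device=device)\nprint(model(x).shape)                      # the computation runs on the selected device\n```", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}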
+{"page_content": "In the graphs below, you can see the performance speedup from accelerated GPU training and evaluation compared to the CPU baseline:\n\n*Accelerated GPU training and evaluation speedups over CPU-only (times faster)*\n\n## Getting Started\n\nTo get started, just install the latest [Preview (Nightly) build](https://pytorch.org/get-started/locally/) on your Apple silicon Mac running macOS 12.3 or later with a native version (arm64) of Python.\n\nYou can also learn more about Metal and MPS on [Apple\u2019s Metal page](https://developer.apple.com/metal/).", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}
{"page_content": "\\* _Testing conducted by Apple in April 2022 using production Mac Studio systems with Apple M1 Ultra, 20-core CPU, 64-core GPU 128GB of RAM, and 2TB SSD. Tested with macOS Monterey 12.3, prerelease PyTorch 1.12, ResNet50 (batch size=128), HuggingFace BERT (batch size=64), and VGG16 (batch size=64). Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac Studio._", "metadata": {"source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Accelerating Hugging Face and TIMM models with PyTorch 2.0\"\nauthor: Mark Saroufim\nfeatured-img: \"assets/images/pytorch-2.0-feature-img.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "`torch.compile()` makes it easy to experiment with different compiler backends to make PyTorch code faster with a single line decorator `torch.compile()`. It works directly over an nn.Module as a drop-in replacement for `torch.jit.script()`, but without requiring you to make any source code changes. We expect this one line code change to provide you with between 30%-2x training time speedups on the vast majority of models that you\u2019re already running.\n\n```python\nopt_module = torch.compile(module)\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "torch.compile supports arbitrary PyTorch code, control flow, mutation and comes with experimental support for dynamic shapes. We\u2019re so excited about this development that we call it PyTorch 2.0.\n\nWhat makes this announcement different for us is we\u2019ve already benchmarked some of the most popular open source PyTorch models and gotten substantial speedups ranging from 30% to 2x [https://github.com/pytorch/torchdynamo/issues/681](https://github.com/pytorch/torchdynamo/issues/681).", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "There are no tricks here: we\u2019ve pip installed popular libraries like [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers), [https://github.com/huggingface/accelerate](https://github.com/huggingface/accelerate) and [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models), ran torch.compile() on them, and that\u2019s it.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "It\u2019s rare to get both performance and convenience, but this is why the core team finds PyTorch 2.0 so exciting. The Hugging Face team is also excited, in their words:\n\nRoss Wightman the primary maintainer of TIMM: \u201cPT 2.0 works out of the box with majority of timm models for inference and train workloads and no code changes\u201d", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "Sylvain Gugger the primary maintainer of transformers and accelerate: \"With just one line of code to add, PyTorch 2.0 gives a speedup between 1.5x and 2.x in training Transformers models. This is the most exciting thing since mixed precision training was introduced!\"\n\nThis tutorial will show you exactly how to replicate those speedups so you can be as excited about PyTorch 2.0 as we are.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "Requirements and Setup\n\nFor GPU (newer generation GPUs will see drastically better performance)\n\n```\npip3 install numpy --pre torch --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117\n```\n\nFor CPU\n\n```\npip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu\n```\n\nOptional: Verify Installation\n\n```\ngit clone https://github.com/pytorch/pytorch\ncd pytorch/tools/dynamo\npython verify_dynamo.py\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "Optional: Docker installation\n\nWe also provide all the required dependencies in the PyTorch nightly\nbinaries which you can download with\n\n```\ndocker pull ghcr.io/pytorch/pytorch-nightly\n\n```\n\nAnd for ad hoc experiments just make sure that your container has access\nto all your GPUs\n\n```\ndocker run --gpus all -it ghcr.io/pytorch/pytorch-nightly:latest /bin/bash\n\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "Getting started", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "a toy example\n\nLet\u2019s start with a simple example and make things more complicated step by step. Please note that you\u2019re likely to see more significant speedups the newer your GPU is.\n\n```python\nimport torch\ndef fn(x, y):\n    a = torch.sin(x).cuda()\n    b = torch.sin(y).cuda()\n    return a + b\nnew_fn = torch.compile(fn, backend=\"inductor\")\ninput_tensor = torch.randn(10000).to(device=\"cuda:0\")\na = new_fn(input_tensor, input_tensor)\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "This example won\u2019t actually run faster, but it\u2019s educational.\n\nOps like `torch.cos()` and `torch.sin()` are examples of pointwise ops: they operate element by element on a vector. A more famous pointwise op you might actually want to use would be something like `torch.relu()`.\n\nPointwise ops in eager mode are suboptimal because each one would need to read a tensor from memory, make some changes and then write back those changes.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "The single most important optimization that PyTorch 2.0 does for you is fusion.\n\nSo, back to our example, we can turn 2 reads and 2 writes into 1 read and 1 write, which is crucial especially for newer GPUs where the bottleneck is memory bandwidth (how quickly you can send data to a GPU) rather than compute (how quickly your GPU can crunch floating point operations).\n\nThe second most important optimization that PyTorch 2.0 does for you is CUDA graphs.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "CUDA graphs help eliminate the overhead from launching individual kernels from a python program.\n\ntorch.compile() supports many different backends, but one that we\u2019re particularly excited about is Inductor, which generates Triton kernels [https://github.com/openai/triton](https://github.com/openai/triton) that are written in Python yet outperform the vast majority of handwritten CUDA kernels. Suppose our example above was saved as trig.py; we can inspect the generated Triton kernels by running:", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "```\nTORCH_COMPILE_DEBUG=1 python trig.py\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "```python", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "@pointwise(size_hints=[16384], filename=__file__, meta={'signature': {0: '*fp32', 1: '*fp32', 2: 'i32'}, 'device': 0, 'constants': {}, 'configs': [instance_descriptor(divisible_by_16=(0, 1, 2), equal_to_1=())]})\n@triton.jit\ndef kernel(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):\n xnumel = 10000\n xoffset = tl.program_id(0) * XBLOCK\n xindex = xoffset + tl.reshape(tl.arange(0, XBLOCK), [XBLOCK])\n xmask = xindex < xnumel\n x0 = xindex\n tmp0 = tl.load(in_ptr0 + (x0), xmask)", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "    tmp1 = tl.sin(tmp0)\n    tmp2 = tl.sin(tmp1)\n    tl.store(out_ptr0 + (x0 + tl.zeros([XBLOCK], tl.int32)), tmp2, xmask)\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "And you can verify that fusing the two `sins` did actually occur because the two `sin` operations occur within a single Triton kernel and the temporary variables are held in registers with very fast access.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
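-{"page_content": "As a side note, the same fusion opportunity applies to any chain of pointwise ops. Here is a small sketch (not from the original post) of such a chain; when handed to torch.compile(), the whole chain is eligible to be emitted as a single fused kernel instead of paying one memory round trip per op:\n\n```python\nimport torch\n\ndef pointwise_chain(x):\n    # Three pointwise ops: in eager mode each one reads and writes the full tensor\n    return torch.relu(torch.sin(x) + torch.cos(x))\n\ncompiled_chain = torch.compile(pointwise_chain)\nx = torch.randn(10000)\nout = compiled_chain(x)\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}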
-{"page_content": "a real model\n\nAs a next step, let\u2019s try a real model like ResNet-18 from the PyTorch Hub.\n\n```python\nimport torch\nmodel = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)\nopt_model = torch.compile(model, backend=\"inductor\")\nopt_model(torch.randn(1,3,64,64))\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "If you actually run this, you may be surprised that the first run is slow; that\u2019s because the model is being compiled. Subsequent runs will be faster, so it\u2019s common practice to warm up your model before you start benchmarking it.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "You may have noticed that we also passed in the name of a compiler explicitly here with \u201cinductor\u201d, but it\u2019s not the only available backend; you can run `torch._dynamo.list_backends()` in a REPL to see the full list of available backends. For fun, you should try out `aot_cudagraphs` or `nvfuser`.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
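-{"page_content": "Putting the warm-up advice into practice, a rough benchmarking sketch (not from the original post) could look like the following; it assumes a CUDA GPU and times only the runs after compilation has happened:\n\n```python\nimport time\nimport torch\n\nmodel = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True).cuda()\nopt_model = torch.compile(model, backend=\"inductor\")\nx = torch.randn(16, 3, 224, 224, device=\"cuda\")\n\nopt_model(x)  # warm-up run: triggers compilation, so it is slow\n\ntorch.cuda.synchronize()\nstart = time.time()\nfor _ in range(10):\n    opt_model(x)\ntorch.cuda.synchronize()\nprint(f\"average forward time: {(time.time() - start) / 10:.4f}s\")\n\nprint(torch._dynamo.list_backends())  # the other backends you can pass to torch.compile\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}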
-{"page_content": "Hugging Face models\n\nLet\u2019s do something a bit more interesting now. Our community frequently uses pretrained models from transformers [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers) or TIMM [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models), and one of our design goals for PyTorch 2.0 was that any new compiler stack needs to work out of the box with the vast majority of models people actually run.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "So we\u2019re going to directly download a pretrained model from the Hugging Face hub and optimize it\n\n```python", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "import torch\nfrom transformers import BertTokenizer, BertModel\n# Copy pasted from here https://huggingface.co/bert-base-uncased\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nmodel = BertModel.from_pretrained(\"bert-base-uncased\").to(device=\"cuda:0\")\nmodel = torch.compile(model) # This is the only line of code that we changed\ntext = \"Replace me by any text you'd like.\"\nencoded_input = tokenizer(text, return_tensors='pt').to(device=\"cuda:0\")\noutput = model(**encoded_input)", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "If you remove the `to(device=\"cuda:0\")` from the model and `encoded_input`, then PyTorch 2.0 will generate C++ kernels that will be optimized for running on your CPU. You can inspect both the Triton and C++ kernels for BERT; they\u2019re obviously more complex than the trigonometry example we had above, but you can similarly skim them and understand them if you understand PyTorch.\n\nThe same code also works just fine if used with [https://github.com/huggingface/accelerate](https://github.com/huggingface/accelerate) and DDP.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
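-{"page_content": "For reference, a CPU-only variant of the same example (a sketch, not from the original post) simply drops the device moves, and torch.compile() takes the C++ kernel path described above:\n\n```python\nimport torch\nfrom transformers import BertTokenizer, BertModel\n\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nmodel = BertModel.from_pretrained(\"bert-base-uncased\")   # stays on the CPU\nmodel = torch.compile(model)                              # C++ kernels are generated for CPU\nencoded_input = tokenizer(\"Replace me by any text you'd like.\", return_tensors='pt')\noutput = model(**encoded_input)\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}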
-{"page_content": "Similarly, let\u2019s try out a TIMM example:\n\n```python\nimport timm\nimport torch\nmodel = timm.create_model('resnext101_32x8d', pretrained=True, num_classes=2)\nopt_model = torch.compile(model, backend=\"inductor\")\nopt_model(torch.randn(64,3,7,7))\n```", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "Our goal with PyTorch 2.0 was to build a breadth-first compiler that would speed up the vast majority of actual models people run in open source. The Hugging Face Hub ended up being an extremely valuable benchmarking tool for us, ensuring that any optimization we work on actually helps accelerate models people want to run.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "So please try out PyTorch 2.0, enjoy the free perf, and if you\u2019re not seeing it then please open an issue and we will make sure your model is supported [https://github.com/pytorch/torchdynamo/issues](https://github.com/pytorch/torchdynamo/issues)\n\nAfter all, we can\u2019t claim we\u2019ve created a breadth-first compiler unless YOUR models actually run faster.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.6 now includes Stochastic Weight Averaging'\nauthor: Pavel Izmailov, Andrew Gordon Wilson and Vincent Queneneville-Belair\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Do you use stochastic gradient descent (SGD) or Adam? Regardless of the procedure you use to train your neural network, you can likely achieve significantly better generalization at virtually no additional cost with a simple new technique now natively supported in PyTorch 1.6, Stochastic Weight Averaging (SWA) [1]. Even if you have already trained your model, it\u2019s easy to realize the benefits of SWA by running SWA for a small number of epochs starting with a pre-trained model.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "[Again](https://twitter.com/MilesCranmer/status/1282140440892932096) and [again](https://twitter.com/leopd/status/1285969855062192129), researchers are discovering that SWA improves the performance of well-tuned models in a wide array of practical applications with little cost or effort!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "SWA has a wide range of applications and features:\n* SWA significantly improves performance compared to standard training techniques in computer vision (e.g., VGG, ResNets, Wide ResNets and DenseNets on ImageNet and CIFAR benchmarks [1, 2]).\n* SWA provides state-of-the-art performance on key benchmarks in semi-supervised learning and domain adaptation [2].", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "* SWA was shown to improve performance in language modeling (e.g., AWD-LSTM on WikiText-2 [4]) and policy-gradient methods in deep reinforcement learning [3].", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "* SWAG, an extension of SWA, can approximate Bayesian model averaging in Bayesian deep learning and achieves state-of-the-art uncertainty calibration results in various settings. Moreover, its recent generalization MultiSWAG provides significant additional performance gains and mitigates double-descent [4, 10]. Another approach, Subspace Inference, approximates the Bayesian posterior in a small subspace of the parameter space around the SWA solution [5].", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "* SWA for low precision training, SWALP, can match the performance of full-precision SGD training, even with all numbers quantized down to 8 bits, including gradient accumulators [6].\n* SWA in parallel, SWAP, was shown to greatly speed up the training of neural networks by using large batch sizes and, in particular, set a record by training a neural network to 94% accuracy on CIFAR-10 in 27 seconds [11].", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 1**. *Illustrations of SWA and SGD with a Preactivation ResNet-164 on CIFAR-100 [1]. **Left**: test error surface for three FGE samples and the corresponding SWA solution (averaging in weight space). **Middle** and **Right**: test error and train loss surfaces showing the weights proposed by SGD (at convergence) and SWA, starting from the same initialization of SGD after 125 training epochs. Please see [1] for details on how these figures were constructed*.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "In short, SWA performs an equal average of the weights traversed by SGD (or any stochastic optimizer) with a modified learning rate schedule (see the left panel of Figure 1.). SWA solutions end up in the center of a wide flat region of loss, while SGD tends to converge to the boundary of the low-loss region, making it susceptible to the shift between train and test error surfaces (see the middle and right panels of Figure 1). We emphasize that SWA **can be used with any optimizer, such as Adam, and is not", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "specific to SGD**.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Previously, SWA was in PyTorch contrib. In PyTorch 1.6, we provide a new convenient implementation of SWA in [torch.optim.swa_utils](https://pytorch.org/docs/stable/optim.html#stochastic-weight-averaging).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Is this just Averaged SGD?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "At a high level, averaging SGD iterates dates back several decades in convex optimization [7, 8], where it is sometimes referred to as Polyak-Ruppert averaging, or averaged SGD. **But the details matter**. Averaged SGD is often used in conjunction with a decaying learning rate, and an exponential moving average (EMA), typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. In deep learning, this form of averaged SGD smooths the trajectory of SGD", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "iterates but does not perform very differently.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "By contrast, SWA uses an **equal average** of SGD iterates with a modified **cyclical or high constant learning rate** and exploits the flatness of training objectives [8] specific to **deep learning** for **improved generalization**.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
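-{"page_content": "To spell out that difference, here is a small sketch (not from the original post) of the two update rules applied to a single parameter tensor; the running equal average is what SWA uses, while an EMA down-weights older iterates geometrically:\n\n```python\nimport torch\n\nw_swa = torch.zeros(3)   # running equal average of the iterates\nw_ema = torch.zeros(3)   # exponential moving average (decay chosen only for illustration)\ndecay = 0.9\nn = 0\n\nfor step in range(5):\n    w = torch.randn(3)                        # stands in for the current SGD iterate\n    w_swa = (w_swa * n + w) / (n + 1)         # equal average: every iterate gets weight 1/(n+1)\n    n += 1\n    w_ema = decay * w_ema + (1 - decay) * w   # EMA: older iterates decay geometrically\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}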
-{"page_content": "How does Stochastic Weight Averaging Work?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "There are two important ingredients that make SWA work. First, SWA uses a **modified learning rate** schedule so that SGD (or other optimizers such as Adam) continues to bounce around the optimum and explore diverse models instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see Figure 2 below). The second", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "ingredient is to take an average of the weights **(typically an equal average)** of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained at the end of every epoch within the last 25% of training time (see Figure 2). After training is complete, we then set the weights of the network to the computed SWA averages.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 2**. *Illustration of the learning rate schedule adopted by SWA. Standard decaying schedule is used for the first 75% of the training and then a high constant value is used for the remaining 25%. The SWA averages are formed during the last 25% of training*.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "One important detail is the batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training. So the batch normalization layers do not have the activation statistics computed at the end of training. We can compute these statistics by doing a single forward pass on the train data with the SWA model.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
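-{"page_content": "In code, refreshing the batch normalization statistics amounts to resetting the running stats and doing one pass over the training loader with the SWA model in train mode. This is a minimal sketch of the idea (the library helper `torch.optim.swa_utils.update_bn`, shown later in this post, does this for you and also handles momentum correctly):\n\n```python\nimport torch\n\ndef refresh_bn_stats(loader, swa_model, device=\"cuda\"):\n    # Reset the running statistics of every batch norm layer\n    for module in swa_model.modules():\n        if isinstance(module, torch.nn.modules.batchnorm._BatchNorm):\n            module.reset_running_stats()\n    swa_model.train()  # BN only updates running stats in train mode\n    with torch.no_grad():\n        for input, _ in loader:\n            swa_model(input.to(device))  # forward pass only; no optimizer step\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}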
-{"page_content": "While we focus on SGD for simplicity in the description above, SWA can be combined with any optimizer. You can also use cyclical learning rates instead of a high constant value (see e.g., [2]).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "How to use SWA in PyTorch?\n\nIn `torch.optim.swa_utils` we implement all the SWA ingredients to make it convenient to use SWA with any model. In particular, we implement `AveragedModel` class for SWA models, `SWALR` learning rate scheduler, and `update_bn` utility function to update SWA batch normalization statistics at the end of training.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "In the example below, `swa_model` is the SWA model that accumulates the averages of the weights. We train the model for a total of 300 epochs, and we switch to the SWA learning rate schedule and start to collect SWA averages of the parameters at epoch 160. \n\n```python\nfrom torch.optim.swa_utils import AveragedModel, SWALR\nfrom torch.optim.lr_scheduler import CosineAnnealingLR", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "loader, optimizer, model, loss_fn = ...\nswa_model = AveragedModel(model)\nscheduler = CosineAnnealingLR(optimizer, T_max=300)\nswa_start = 160\nswa_scheduler = SWALR(optimizer, swa_lr=0.05)\n\nfor epoch in range(300):\n    for input, target in loader:\n        optimizer.zero_grad()\n        loss_fn(model(input), target).backward()\n        optimizer.step()\n    if epoch > swa_start:\n        swa_model.update_parameters(model)\n        swa_scheduler.step()\n    else:\n        scheduler.step()", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "# Update bn statistics for the swa_model at the end\ntorch.optim.swa_utils.update_bn(loader, swa_model)\n# Use swa_model to make predictions on test data \npreds = swa_model(test_input)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Next, we explain each component of `torch.optim.swa_utils` in detail.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "`AveragedModel` class serves to compute the weights of the SWA model. You can create an averaged model by running `swa_model = AveragedModel(model)`. You can then update the parameters of the averaged model by `swa_model.update_parameters(model)`. By default, `AveragedModel` computes a running equal average of the parameters that you provide, but you can also use custom averaging functions with the `avg_fn` parameter. In the following example, `ema_model` computes an exponential moving average.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "```python\nema_avg = lambda averaged_model_parameter, model_parameter, num_averaged:\\\n0.1 * averaged_model_parameter + 0.9 * model_parameter\nema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "In practice, we find an equal average with the modified learning rate schedule in Figure 2 provides the best performance.\n\n`SWALR` is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant. For example, the following code creates a scheduler that linearly anneals the learning rate from its initial value to `0.05` in `5` epochs within each parameter group.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "```python\nswa_scheduler = torch.optim.swa_utils.SWALR(optimizer, \nanneal_strategy=\"linear\", anneal_epochs=5, swa_lr=0.05)\n\n```\nWe also implement cosine annealing to a fixed value (`anneal_strategy=\"cos\"`). In practice, we typically switch to `SWALR` at epoch `swa_start` (e.g. after 75% of the training epochs), and simultaneously start to compute the running averages of the weights:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "```python\nscheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)\nswa_start = 75\nfor epoch in range(100):\n    # ... train for one epoch ...\n    if epoch > swa_start:\n        swa_model.update_parameters(model)\n        swa_scheduler.step()\n    else:\n        scheduler.step()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Finally, `update_bn` is a utility function that computes the batchnorm statistics for the SWA model on a given dataloader `loader`:\n```\ntorch.optim.swa_utils.update_bn(loader, swa_model) \n```\n`update_bn` applies the `swa_model` to every element in the dataloader and computes the activation statistics for each batch normalization layer in the model.\n\nOnce you computed the SWA averages and updated the batch normalization layers, you can apply `swa_model` to make predictions on test data.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Why does it work?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "There are large flat regions of the loss surface [9]. In Figure 3 below, we show a visualization of the loss surface in a subspace of the parameter space containing a path connecting two independently trained SGD solutions, such that the loss is similarly low at every point along the path. SGD converges near the boundary of these regions because there isn\u2019t much gradient signal to move inside, as the points in the region all have similarly low values of loss. By increasing the learning rate, SWA spins", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "around this flat region, and then by averaging the iterates, moves towards the center of the flat region.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 3**: *visualization of mode connectivity for ResNet-20 with no skip connections on CIFAR-10 dataset. The visualization is created in collaboration with Javier Ideami [(https://losslandscape.com/)](https://losslandscape.com/). For more details, see this [blogpost](https://izmailovpavel.github.io/curves_blogpost/)*.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "We expect solutions that are centered in the flat region of the loss to generalize better than those near the boundary. Indeed, train and test error surfaces are not perfectly aligned in the weight space. Solutions that are centered in the flat region are not as susceptible to the shifts between train and test error surfaces as those near the boundary. In Figure 4 below, we show the train loss and test error surfaces along the direction connecting the SWA and SGD solutions. As you can see, while the SWA", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "solution has a higher train loss compared to the SGD solution, it is centered in a region of low loss and has a substantially better test error.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 4**. *Train loss and test error along the line connecting the SWA solution (circle) and SGD solution (square). The SWA solution is centered in a wide region of low train loss, while the SGD solution lies near the boundary. Because of the shift between train loss and test error surfaces, the SWA solution leads to much better generalization*.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "What are the results achieved with SWA?\n\nWe release a GitHub [repo](https://github.com/izmailovpavel/torch_swa_examples) with examples using the PyTorch implementation of SWA for training DNNs. For example, these examples can be used to achieve the following results on CIFAR-100:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "{:.table.table-striped.table-bordered}\n | | VGG-16 | ResNet-164 | WideResNet-28x10 | \n| ------------- | ------------- | ------------- | ------------- |\n| SGD | 72.8 \u00b1 0.3 | 78.4 \u00b1 0.3 | 81.0 \u00b1 0.3 | \n| SWA | 74.4 \u00b1 0.3 | 79.8 \u00b1 0.4 | 82.5 \u00b1 0.2 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Semi-Supervised Learning", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "In a follow-up [paper](https://arxiv.org/abs/1806.05594) SWA was applied to semi-supervised learning, where it improved the best reported results in multiple settings [2]. For example, with SWA you can get 95% accuracy on CIFAR-10 if you only have the training labels for 4k training data points (the previous best reported result on this problem was 93.7%). This paper also explores averaging multiple times within epochs, which can accelerate convergence and find still flatter solutions in a given time.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 5**. Performance of fast-SWA on semi-supervised learning with CIFAR-10. fast-SWA achieves record results in every setting considered.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Reinforcement Learning\n\nIn another follow-up [paper](http://www.gatsby.ucl.ac.uk/~balaji/udl-camera-ready/UDL-24.pdf) SWA was shown to improve the performance of policy gradient methods A2C and DDPG on several Atari games and MuJoCo environments [3]. This application is also an instance of where SWA is used with Adam. Recall that SWA is not specific to SGD and can benefit essentially any optimizer.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "{:.table.table-striped.table-bordered}\n | Environment Name | A2C | A2C + SWA | \n| ------------- | ------------- | ------------- | \n| Breakout | 522 \u00b1 34 | 703 \u00b1 60 |\n| Qbert | 18777 \u00b1 778 | 21272 \u00b1 655 |\n| SpaceInvaders | 7727 \u00b1 1121 | 21676 \u00b1 8897 |\n| Seaquest | 1779 \u00b1 4 | 1795 \u00b1 4 |\n| BeamRider | 9999 \u00b1 402 | 11321 \u00b1 1065 |\n| CrazyClimber | 147030 \u00b1 10239 | 139752 \u00b1 11618 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Low Precision Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "We can filter through quantization noise by combining weights that have been rounded down with weights that have been rounded up. Moreover, by averaging weights to find a flat region of the loss surface, large perturbations of the weights will not affect the quality of the solution (Figures 9 and 10). Recent [work](https://arxiv.org/abs/1904.11943) shows that by adapting SWA to the low precision setting, in a method called SWALP, one can match the performance of full-precision SGD even with all training in", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "8 bits [5]. This is quite a practically important result, given that (1) SGD training in 8 bits performs notably worse than full precision SGD, and (2) low precision training is significantly harder than predictions in low precision after training (the usual setting). For example, a ResNet-164 trained on CIFAR-100 with float (16-bit) SGD achieves 22.2% error, while 8-bit SGD achieves 24.0% error. By contrast, SWALP with 8 bit training achieves 21.8% error.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 9**. *Quantizing a solution leads to a perturbation of the weights which has a greater effect on the quality of the sharp solution (left) compared to wide solution (right)*.\n\n**Figure 10**. *The difference between standard low precision training and SWALP*.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Another [work](https://arxiv.org/abs/2002.00343), SQWA, presents an approach for quantization and fine-tuning of neural networks in low precision [12]. In particular, SQWA achieved state-of-the-art results for DNNs quantized to 2 bits on CIFAR-100 and ImageNet.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Calibration and Uncertainty Estimates\n\nBy finding a centred solution in the loss, SWA can also improve calibration and uncertainty representation. Indeed, SWA can be viewed as an approximation to an ensemble, resembling a Bayesian model average, but with a single model [1].", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "SWA can be viewed as taking the first moment of SGD iterates with a modified learning rate schedule. We can directly generalize SWA by also taking the second moment of iterates to form a Gaussian approximate posterior over the weights, further characterizing the loss geometry with SGD iterates. This approach, [SWA-Gaussian (SWAG)](https://arxiv.org/abs/1902.02476), is a simple, scalable and convenient approach to uncertainty estimation and calibration in Bayesian deep learning [4]. The SWAG distribution", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "approximates the shape of the true posterior: Figure 6 below shows the SWAG distribution and the posterior log-density for ResNet-20 on CIFAR-10.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 6**. *SWAG posterior approximation and the loss surface for a ResNet-20 without skip-connections trained on CIFAR-10 in the subspace formed by the two largest eigenvalues of the SWAG covariance matrix. The shape of SWAG distribution is aligned with the posterior: the peaks of the two distributions coincide, and both distributions are wider in one direction than in the orthogonal direction. Visualization created in collaboration with* [Javier Ideami](https://losslandscape.com/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Empirically, SWAG performs on par or better than popular alternatives including MC dropout, KFAC Laplace, and temperature scaling on uncertainty quantification, out-of-distribution detection, calibration and transfer learning in computer vision tasks. Code for SWAG is available [here](https://github.com/wjmaddox/swa_gaussian).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
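-{"page_content": "As a rough sketch of the underlying idea (not the SWAG library code linked above), one can track the first and second moments of the flattened weights over SGD iterates and then sample from the resulting diagonal Gaussian:\n\n```python\nimport torch\n\nmodel = torch.nn.Linear(4, 2)  # stands in for the network being trained with SGD\n\ndef flat_params(m):\n    return torch.cat([p.detach().reshape(-1) for p in m.parameters()])\n\nmean = torch.zeros_like(flat_params(model))   # running first moment of the iterates\nsq_mean = torch.zeros_like(mean)              # running second moment of the iterates\nn = 0\n\nfor step in range(10):\n    # ... an SGD step on `model` would go here ...\n    w = flat_params(model)\n    mean = (mean * n + w) / (n + 1)\n    sq_mean = (sq_mean * n + w ** 2) / (n + 1)\n    n += 1\n\n# Diagonal Gaussian approximation N(mean, diag(var)); draw one sample of the weights\nvar = torch.clamp(sq_mean - mean ** 2, min=1e-30)\nw_sample = mean + var.sqrt() * torch.randn_like(mean)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}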
-{"page_content": "**Figure 7**. *MultiSWAG generalizes SWAG and deep ensembles, to perform Bayesian model averaging over multiple basins of attraction, leading to significantly improved performance. By contrast, as shown here, deep ensembles select different modes, while standard variational inference (VI) marginalizes (model averages) within a single basin*.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "MultiSWAG [9] uses multiple independent SWAG models to form a mixture of Gaussians as an approximate posterior distribution. Different basins of attraction contain highly complementary explanations of the data. Accordingly, marginalizing over these multiple basins provides a significant boost in accuracy and uncertainty representation. MultiSWAG can be viewed as a generalization of deep ensembles, but with performance improvements.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Indeed, we see in Figure 8 that MultiSWAG entirely mitigates double descent -- more flexible models have monotonically improving performance -- and provides significantly improved generalization over SGD. For example, when the ResNet-18 has layers of width 20, Multi-SWAG achieves under 30% error whereas SGD achieves over 45%, more than a 15% gap!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 8**. *SGD, SWAG, and Multi-SWAG on CIFAR-100 for a ResNet-18 with varying widths. We see Multi-SWAG in particular mitigates double descent and provides significant accuracy improvements over SGD*.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Reference [10] also considers Multi-SWA, which uses multiple independently trained SWA solutions in an ensemble, providing performance improvements over deep ensembles without any additional computational cost. Code for MultiSWA and MultiSWAG is available [here](https://github.com/izmailovpavel/understandingbdl).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Another [method](https://arxiv.org/abs/1907.07504), Subspace Inference, constructs a low-dimensional subspace around the SWA solution and marginalizes the weights in this subspace to approximate the Bayesian model average [5]. Subspace Inference uses the statistics from the SGD iterates to construct both the SWA solution and the subspace. The method achieves strong performance in terms of prediction accuracy and uncertainty calibration both in classification and regression problems. Code is available", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "[here](https://github.com/wjmaddox/drbayes).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "Try it Out!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "One of the greatest open questions in deep learning is why SGD manages to find good solutions, given that the training objectives are highly multimodal, and there are many settings of parameters that achieve no training loss but poor generalization. By understanding geometric features such as flatness, which relate to generalization, we can begin to resolve these questions and build optimizers that provide even better generalization, and many other useful features, such as uncertainty representation. We", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "have presented SWA, a simple drop-in replacement for standard optimizers such as SGD and Adam, which can in principle, benefit anyone training a deep neural network. SWA has been demonstrated to have a strong performance in several areas, including computer vision, semi-supervised learning, reinforcement learning, uncertainty representation, calibration, Bayesian model averaging, and low precision training.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "We encourage you to try out SWA! SWA is now as easy as any standard training in PyTorch. And even if you have already trained your model, you can use SWA to significantly improve performance by running it for a small number of epochs from a pre-trained model. \n\n\n[1] Averaging Weights Leads to Wider Optima and Better Generalization; Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2018.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "[2] There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average; Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson; \nInternational Conference on Learning Representations (ICLR), 2019.\n\n[3] Improving Stability in Deep Reinforcement Learning with Weight Averaging; Evgenii Nikishin, Pavel Izmailov, Ben Athiwaratkun, Dmitrii Podoprikhin, \nTimur Garipov, Pavel Shvechikov, Dmitry Vetrov, Andrew Gordon Wilson; UAI 2018 Workshop: Uncertainty in Deep Learning, 2018.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "[4] A Simple Baseline for Bayesian Uncertainty in Deep Learning\nWesley Maddox, Timur Garipov, Pavel Izmailov, Andrew Gordon Wilson; Neural Information Processing Systems (NeurIPS), 2019.\n\n[5] Subspace Inference for Bayesian Deep Learning\nPavel Izmailov, Wesley Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson\nUncertainty in Artificial Intelligence (UAI), 2019.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "[6] SWALP : Stochastic Weight Averaging in Low Precision Training\nGuandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, \nAndrew Gordon Wilson, Christopher De Sa; International Conference on Machine Learning (ICML), 2019.\n\n[7] David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process; Technical report, Cornell University Operations Research and Industrial Engineering, 1988.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "[8] Acceleration of stochastic approximation by averaging. Boris T Polyak and Anatoli B Juditsky; SIAM Journal on Control and Optimization, 30(4):838\u2013855, 1992.\n\n[9] Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs\nTimur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, \nAndrew Gordon Wilson. Neural Information Processing Systems (NeurIPS), 2018.\n\n[10] Bayesian Deep Learning and a Probabilistic Perspective of Generalization\nAndrew Gordon Wilson, Pavel Izmailov. ArXiv preprint, 2020.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "[11] Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well\nGupta, Vipul, Santiago Akle Serrano, and Dennis DeCoste; International Conference on Learning Representations (ICLR). 2019.\n\n[12] SQWA: Stochastic Quantized Weight Averaging for Improving the Generalization Capability of Low-Precision Deep Neural Networks\nShin, Sungho, Yoonho Boo, and Wonyong Sung; arXiv preprint 2020.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing TorchRec, and other domain library updates in PyTorch 1.11\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/pytorch-logo.jpg\"\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We are introducing the beta release of TorchRec and a number of improvements to the current PyTorch domain libraries, alongside the [PyTorch 1.11 release](https://pytorch.org/blog/pytorch-1.11-released/). These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch. Highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **TorchRec**, a PyTorch domain library for Recommendation Systems, is available in beta. [View it on GitHub](https://github.com/pytorch/torchrec).\n- **TorchAudio** - Added Enformer- and RNN-T-based models and recipes to support the full development lifecycle of a streaming ASR model. See the release notes [here](https://github.com/pytorch/audio/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **TorchText** - Added beta support for RoBERTa and XLM-R models, byte-level BPE tokenizer, and text datasets backed by TorchData. See the release notes [here](https://github.com/pytorch/text/releases).\n- **TorchVision** - Added 4 new model families and 14 new classification datasets such as CLEVR, GTSRB, FER2013. See the release notes [here](https://github.com/pytorch/vision/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchRec 0.1\n\nWe [announced TorchRec](https://pytorch.org/blog/introducing-torchrec/) a few weeks ago and we are excited to release the beta version today. To recap, TorchRec is a PyTorch domain library for Recommendation Systems. This new library provides common sparsity and parallelism primitives, enabling researchers to build state-of-the-art personalization models and deploy them in production. TorchRec was used to train a 1.25 trillion parameter model, pushed to production in January 2022.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "In particular, the library includes:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- Modeling primitives, such as embedding bags and jagged tensors, that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism and model-parallelism.\n- Optimized RecSys kernels powered by [FBGEMM](https://github.com/pytorch/FBGEMM), including support for sparse and quantized operations.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- A sharder which can partition embedding tables with a variety of different strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.\n- A planner which can automatically generate optimized sharding plans for models.\n- Pipelining to overlap dataloading device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.\n- GPU inference support.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- Common modules for RecSys, such as models and public datasets (Criteo & Movielens).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Please check the TorchRec announcement post [here](https://pytorch.org/blog/introducing-torchrec/), [video tutorial](https://www.youtube.com/watch?v=cjgj41dvSeQ), install instructions [here](https://github.com/pytorch/torchrec#readme), test drive the feature through this tutorial [here](https://pytorch.org/tutorials/intermediate/torchrec_tutorial.html), and refer to the reference document [here](https://pytorch.org/torchrec/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchAudio 0.11\n\n#### TorchAudio: Building Blocks for Audio and Speech Processing\n\nWe published a paper, [TorchAudio: Building Blocks for Audio and Speech Processing](https://arxiv.org/abs/2110.15018), describing the overview of the TorchAudio library. If you find TorchAudio useful for your research, please help us share with the community by citing our paper.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) RNN-T & (Prototype) Emformer Models and Recipes\n\n\n
\n
\n\nEmformer is an efficient memory-transformer-based streaming acoustic model that has demonstrated state-of-the-art streaming automatic speech recognition (ASR) performance in low-latency, resource-constrained scenarios, such as on-device applications (citation: [https://arxiv.org/abs/2010.10759](https://arxiv.org/abs/2010.10759)).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The TorchAudio v0.11 release includes the following beta features:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- Implementation of Emformer ([docs](https://pytorch.org/audio/main/models.html#emformer))\n- Recurrent neural network transducer (RNN-T) streaming ASR model that uses Emformer for its transcription network ([docs](https://pytorch.org/audio/main/models.html#rnn-t))\n- RNN-T beam search decoder with TorchScript support ([docs](https://pytorch.org/audio/main/models.html#rnntbeamsearch))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- LibriSpeech Emformer RNN-T training recipe ([GitHub](https://github.com/pytorch/audio/tree/release/0.11/examples/asr/librispeech_emformer_rnnt)) and corresponding pre-trained streaming ASR inference pipeline ([docs](https://pytorch.org/audio/main/pipelines.html#emformer-rnnt-base-librispeech))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Also there are prototype features that are available from nightly builds or the main branch.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- Training recipes trained on MuST-C and TED-LIUM3 datasets. ([GitHub](https://github.com/pytorch/audio/tree/main/examples/asr/emformer_rnnt))\n- Pre-trained pipelines corresponding to the recipes. ([docs](https://pytorch.org/audio/main/prototype.pipelines.html))\n- Tutorial that steps through performing online speech recognition with RNN-T Emformer model. ([docs](https://pytorch.org/audio/main/tutorials/online_asr_tutorial.html))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Collectively, these features cover the full development lifecycle of a streaming ASR model, from definition through training and inference, and enable users to easily develop their own Emformer- and RNN-T-based models.\n\nSpecial thanks to Yangyang Shi, Jay Mahadeokar, and Gil Keren for their code contributions and guidance.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) HuBERT Pretrain Model", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The masked prediction training of HuBERT model requires the masked logits, unmasked logits, and feature norm as the outputs. The logits are for cross-entropy losses and the feature norm is for penalty loss. The release adds [HuBERTPretrainModel](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L120-L205) and corresponding factory functions ([hubert_pretrain_base](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L964-L1027),", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "[hubert_pretrain_large](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L1030-L1090), and [hubert_pretrain_xlarge](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L1093-L1153)) to enable training from scratch.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) CTC Beam Search Decoder\n\nIn recent releases, TorchAudio has added support for ASR models fine-tuned on CTC loss. The addition of an inference time CTC beam search decoder enables running end-to-end ASR evaluation using TorchAudio utils.\n\nThe CTC decoder in TorchAudio supports customizable beam search decoding with lexicon constraint. It also has optional KenLM language model support.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "For more details, please check out the [API tutorial](https://pytorch.org/audio/main/tutorials/asr_inference_with_ctc_decoder_tutorial.html) and [documentation](https://pytorch.org/audio/main/prototype.ctc_decoder.html). This prototype feature is available through nightly builds.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) Streaming API\n\nTorchAudio started as simple audio I/O APIs that supplement PyTorch. With the recent addition of ASR models and training recipes, the project has received requests to support high-level application development.\n\nStreaming API makes it easy to develop and test the model in online inference. It utilizes ffmpeg under the hood, and enables reading media from online services and hardware devices, decoding media in an incremental manner, and applying filters and preprocessing.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Please checkout the [API tutorial](https://pytorch.org/audio/main/tutorials/streaming_api_tutorial.html) and [the documentation](https://pytorch.org/audio/main/prototype.io.html). There are also the [streaming ASR](https://pytorch.org/audio/main/tutorials/online_asr_tutorial.html) tutorial and the [device streaming ASR tutorial](https://pytorch.org/audio/main/tutorials/device_asr.html). This feature is available from nightly releases. Please refer to [pytorch.org](https://pytorch.org/get-started/locally/)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "for how to install nightly builds.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchText 0.12", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) RoBERTa and XLM-R Models\n\nTorchText has added support for pre-trained RoBERTa and XLM-R models. It would allow users to train end-2-end Transformer Encoder based models on standard NLP tasks using TorchText.\n\nMore specifically:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- The models are torchscriptable and hence can be employed for production use-cases.\n- The model APIs let users to easily attach custom task-specific heads with pre-trained encoders.\n- The API also comes equipped with data pre-processing transforms to match the pre-trained weights and model configuration.\n\nWe have added a [tutorial](https://pytorch.org/text/main/tutorials/sst2_classification_non_distributed.html) to demonstrate SST-2 binary text classification task with pre-trained XLM-R base architecture.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "For additional details on model APIs and usage examples, please refer to the [documentation](https://pytorch.org/text/main/models.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) byte-level BPE tokenizer", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchText has added support for a Byte-Level BPE tokenizer, as used in GPT-2. This tokenizer is also used for tokenizing inputs to the pre-trained RoBERTa models described previously. In addition to the RoBERTa vocab, users can also load their own custom BPE vocab to use the tokenizer. Furthermore, the tokenizer is fully torchscriptable and hence can be employed for production use-cases. For additional details on model APIs and usage examples, please refer to the", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "[documentation](https://pytorch.org/text/main/transforms.html#gpt2bpetokenizer).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Text datasets backed by TorchData\n\nTorchText has modernized its datasets by migrating from older-style Iterable Datasets to [TorchData\u2019s](https://github.com/pytorch/data#readme) DataPipes. TorchData is a library that provides modular/composable primitives, allowing users to load and transform data in performant data pipelines.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "These DataPipes work out-of-the-box with PyTorch DataLoader and would enable new functionalities like auto-sharding. Users can now easily do data manipulation and pre-processing using user-defined functions and transformations in a functional style programming. Datasets backed by DataPipes also enable standard flow-control like batching, collation, shuffling and bucketizing.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Collectively, DataPipes provides a comprehensive experience for data preprocessing and tensorization needs in a pythonic and flexible way for model training. We have added a [tutorial](https://pytorch.org/text/main/tutorials/sst2_classification_non_distributed.html) to demonstrate data-processing pipelining using the modernized dataset for binary text-classification.\n\nYou can learn more about TorchData DataPipe APIs in its [official documentation](https://pytorch.org/data).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchVision 0.12", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "New Models\n\nFour new model families have been released in the latest version along with pre-trained weights for their variants.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "#1 Object Detection\n\n[FCOS](https://arxiv.org/pdf/1904.01355.pdf) is a popular, fully convolutional, anchor-free model for object detection. In this release we include a community-contributed model implementation as well as pre-trained weights. The model was trained on COCO train2017 and can be used as follows:\n\n```python\nimport torch\nfrom torchvision import models\n\nx = [torch.rand(3, 224, 224)]\nfcos = models.detection.fcos_resnet50_fpn(pretrained=True).eval()\npredictions = fcos(x)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The box AP of the pre-trained model on COCO val2017 is 39.2 (see [#4961](https://github.com/pytorch/vision/pull/4961) for more details).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We would like to thank [Hu Ye](https://github.com/xiaohu2015) and [Zhiqiang Wang](https://github.com/zhiqwang) for contributing to the model implementation and initial training. This was the first community-contributed model in a long while, and given its success, we decided to use the learnings from this process and create a new [model contribution guidelines](https://github.com/pytorch/vision/blob/main/CONTRIBUTING_MODELS.md).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "#2 Optical Flow support and RAFT model\n\nTorchVision now supports optical flow! Optical Flow models try to predict movement in a video: given two consecutive frames, the model predicts where each pixel of the first frame ends up in the second frame. Check out our [new tutorial on Optical Flow](https://pytorch.org/vision/0.12/auto_examples/plot_optical_flow.html#sphx-glr-auto-examples-plot-optical-flow-py)!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We implemented a torchscript-compatible [RAFT](https://arxiv.org/abs/2003.12039) model with pre-trained weights (both normal and \u201csmall\u201d versions), and added support for [training and evaluating](https://github.com/pytorch/vision/tree/main/references/optical_flow) optical flow models. Our training scripts support distributed training across processes and nodes, leading to much faster training time than the original implementation. We also added 5 new [optical flow", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "datasets](https://pytorch.org/vision/0.12/datasets.html#optical-flow): Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "#3. Image Classification\n\n[Vision Transformer](https://arxiv.org/abs/2010.11929) (ViT) and [ConvNeXt](https://arxiv.org/abs/2201.03545) are two popular architectures which can be used as image classifiers or as backbones for downstream vision tasks. In this release we include 8 pre-trained weights for their classification variants. The models were trained on ImageNet and can be used as follows:\n\n```python\nimport torch\nfrom torchvision import models", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "x = torch.rand(1, 3, 224, 224)\nvit = models.vit_b_16(pretrained=True).eval()\nconvnext = models.convnext_tiny(pretrained=True).eval()\npredictions1 = vit(x)\npredictions2 = convnext(x)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The accuracies of the pre-trained models obtained on ImageNet val are seen below:\n\n| **Model** | **Acc@1** | **Acc@5** |\n| -------------- | --------: | --------: |\n| vit_b_16 | 81.072 | 95.318 |\n| vit_b_32 | 75.912 | 92.466 |\n| vit_l_16 | 79.662 | 94.638 |\n| vit_l_32 | 76.972 | 93.07 |\n| convnext_tiny | 82.52 | 96.146 |\n| convnext_small | 83.616 | 96.65 |\n| convnext_base | 84.062 | 96.87 |\n| convnext_large | 84.414 | 96.976 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The above models have been trained using an adjusted version of our new [training recipe](https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/) and this allows us to offer models with accuracies significantly higher than the ones on the original papers.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "#4. GPU Video Decoding\n\nIn this release, we add support for GPU video decoding in the video reading API. To use hardware-accelerated decoding, we just need to pass a cuda device to the video reading API as shown below:\n\n```python\nimport torchvision\n\nreader = torchvision.io.VideoReader(file_name, device=\"cuda:0\")\nfor frame in reader:\n print(frame)\n```\n\nWe also support seeking to anyframe or a keyframe in the video before reading, as shown below:\n\n```python\nreader.seek(seek_time)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "New Datasets\n\nWe have implemented 14 new [classification datasets](https://pytorch.org/vision/0.12/datasets.html#image-classification): CLEVR, GTSRB, FER2013, SUN397, Country211, Flowers102, fvgc_aircraft, OxfordIIITPet, DTD, Food 101, Rendered SST2, Stanford cars, PCAM, and EuroSAT.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "As part of our work on Optical Flow support (see above for more details), we also added 5 new [optical flow datasets](https://pytorch.org/vision/0.12/datasets.html#optical-flow): Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Other Updates", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **New documentation layout**: Each function / class is now documented in a separate page, clearing up some space in the per-module pages, and easing the discovery of the proposed APIs. Compare e.g. our [previous docs](https://pytorch.org/vision/0.11/transforms.html) vs the [new ones](https://pytorch.org/vision/0.12/transforms.html). Please let us know if you have any [feedback](https://github.com/pytorch/vision/issues/5511)!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **New [model contribution guidelines](https://github.com/pytorch/vision/blob/main/CONTRIBUTING_MODELS.md)** have been published following the success of the [FCOS](https://github.com/pytorch/vision/pull/4961) model which was contributed by the community. These guidelines aim to be an overview of the model contribution process for anyone who would like to suggest, implement and train a new model.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **Upcoming Prototype API** - We are currently working on a prototype API which adds Multi-weight support on all of our model builder methods. This will enable us to offer multiple pre-trained weights, associated with their meta-data and inference transforms. The API is still under review and thus was not included in the release but you can read more about it on our [blogpost](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) and provide your feedback on the dedicated [Github", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "issue](https://github.com/pytorch/vision/issues/5088).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **Changes in our deprecation policy** - Up until now, torchvision would almost never remove deprecated APIs. In order to be more aligned and consistent with pytorch core, we are updating our deprecation policy. We are now following a 2-release deprecation cycle: deprecated APIs will raise a warning for 2 versions, and will be removed after that. To reflect these changes and to smooth the transition, we have decided to:\n - Remove all APIs that had been deprecated before or on v0.8, released 1.5 years ago.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- Update the removal timeline of all other deprecated APIs to v0.14, to reflect the new 2-cycle policy starting now in v0.12.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Captum 0.5\n\n[Captum](https://captum.ai/) is a PyTorch library for model interpretability. For this release, we expanded Captum with influential instances and added support for both similarity based influences and novel algorithms, [TracIn](https://arxiv.org/abs/2002.08484) and its variants. TracIn variants offer faster approximation of influence scores based on random projections for fully connected layers.\n\nMore specifically the new, influence, subsection of Captum includes:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **[SimilarityInfluence](https://captum.ai/api/influence.html#similarityinfluence)** computes similarity scores between test and training examples using default (cosine or euclidean) or custom user definite metrics w.r.t. given input model layers.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **[TracInCP](https://captum.ai/api/influence.html#tracincp)** approximates the influential score of each training example on a given test example based on the dot-product similarity between loss gradients w.r.t. model parameters for test and training examples. Note that if we use training examples as test examples then we compute self influence. This method and its variants described below also return top-k proponents and opponents which are the top-k largest positive and negative influential examples", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "respectively.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **[TracInCPFast](https://captum.ai/api/influence.html#tracincpfast)** is an approximation of TracInCP that avoids computing the gradients w.r.t. large parameter matrices. It approximates influence score based on the dot products between last fully connected layer activations and loss gradients w.r.t. that layer for training and test examples.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **[TracInCPFastRandProj](https://captum.ai/api/influence.html#tracincpfastrandproj)** uses a nearest neighbor approximation library such as annoy to compute the dot product between the training and test quantities. In order to reduce the dimensionality of layer activations and corresponding gradients this method, in addition, allows to project those vectors into a lower dimensional space using random projection matrices.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "More about the implementation of influential instances can be found on our [GitHub](https://github.com/pytorch/captum/tree/master/captum/influence) page and [tutorials](https://captum.ai/tutorials/TracInCP_Tutorial).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Thanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), and [LinkedIn](https://www.linkedin.com/company/pytorch).\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "\n \n
\n", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Accelerating Hugging Face and TIMM models with PyTorch 2.0\"\nauthor: Mark Saroufim\nfeatured-img: \"assets/images/pytorch-2.0-feature-img.png\"\n---\n\n`torch.compile()` makes it easy to experiment with different compiler backends to make PyTorch code faster with a single line decorator `torch.compile()`. It works either directly over an nn.Module as a drop-in replacement for `torch.jit.script()` but without requiring you to make any source code changes. We expect this one line code change to provide you with between 30%-2x training time speedups on the vast majority of models that you\u2019re already running.\n\n```python\n\nopt_module = torch.compile(module)\n\n```\n\ntorch.compile supports arbitrary PyTorch code, control flow, mutation and comes with experimental support for dynamic shapes. We\u2019re so excited about this development that we call it PyTorch 2.0.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "What makes this announcement different for us is we\u2019ve already benchmarked some of the most popular open source PyTorch models and gotten substantial speedups ranging from 30% to 2x [https://github.com/pytorch/torchdynamo/issues/681](https://github.com/pytorch/torchdynamo/issues/681).\n\nThere are no tricks here, we\u2019ve pip installed popular libraries like [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers), [https://github.com/huggingface/accelerate](https://github.com/huggingface/accelerate) and [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models) and then ran torch.compile() on them and that\u2019s it.\n\nIt\u2019s rare to get both performance and convenience, but this is why the core team finds PyTorch 2.0 so exciting. The Hugging Face team is also excited, in their words:\n\nRoss Wightman the primary maintainer of TIMM: \u201cPT 2.0 works out of the box with majority of timm models for inference and train workloads and no code changes\u201d", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "Sylvain Gugger the primary maintainer of transformers and accelerate: \"With just one line of code to add, PyTorch 2.0 gives a speedup between 1.5x and 2.x in training Transformers models. This is the most exciting thing since mixed precision training was introduced!\"\n\nThis tutorial will show you exactly how to replicate those speedups so you can be as excited as to PyTorch 2.0 as we are.\n\n## Requirements and Setup\n\nFor GPU (newer generation GPUs will see drastically better performance)\n\n```\npip3 install numpy --pre torch --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117\n\n```\n\nFor CPU\n\n```\npip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu\n\n```\n\nOptional: Verify Installation\n\n```\ngit clone https://github.com/pytorch/pytorch\ncd tools/dynamo\npython verify_dynamo.py\n```\n\nOptional: Docker installation\n\nWe also provide all the required dependencies in the PyTorch nightly\nbinaries which you can download with", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "```\ndocker pull ghcr.io/pytorch/pytorch-nightly\n\n```\n\nAnd for ad hoc experiments just make sure that your container has access\nto all your GPUs\n\n```\ndocker run --gpus all -it ghcr.io/pytorch/pytorch-nightly:latest /bin/bash\n\n```\n\n## Getting started\n\n### a toy exmaple\n\nLet\u2019s start with a simple example and make things more complicated step\nby step. Please note that you\u2019re likely to see more significant speedups the newer your GPU is.\n\n```python\nimport torch\ndef fn(x, y):\n a = torch.sin(x).cuda()\n b = torch.sin(y).cuda()\n return a + b\nnew_fn = torch.compile(fn, backend=\"inductor\")\ninput_tensor = torch.randn(10000).to(device=\"cuda:0\")\na = new_fn(input_tensor, input_tensor)\n```\n\nThis example won\u2019t actually run faster but it\u2019s educational.\n\nexample that features `torch.cos()` and `torch.sin()` which are examples of pointwise ops as in they operate element by element on a vector. A more famous pointwise op you might actually want to use would be something like `torch.relu()`.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "Pointwise ops in eager mode are suboptimal because each one would need to read a tensor from memory, make some changes and then write back those changes.\n\nThe single most important optimization that PyTorch 2.0 does for you is fusion.\n\nSo back to our example we can turn 2 reads and 2 writes into 1 read and 1 write which is crucial especially for newer GPUs where the bottleneck is memory bandwidth (how quickly you can send data to a GPU) instead of compute (how quickly your GPU can crunch floating point operations)\n\nThe second most important optimization that PyTorch 2.0 does for you is CUDA graphs\n\nCUDA graphs help eliminate the overhead from launching individual kernels from a python program.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "torch.compile() supports many different backends but one that we\u2019re particularly excited about is Inductor which generates Triton kernels [https://github.com/openai/triton](https://github.com/openai/triton) which are written in Python yet outperform the vast majority of handwritten CUDA kernels. Suppose our example above was called trig.py we can actually inspect the code generated triton kernels by running.\n\n```\nTORCH_COMPILE_DEBUG=1 python trig.py\n```\n\n```python", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "```python\n\n@pointwise(size_hints=[16384], filename=__file__, meta={'signature': {0: '*fp32', 1: '*fp32', 2: 'i32'}, 'device': 0, 'constants': {}, 'configs': [instance_descriptor(divisible_by_16=(0, 1, 2), equal_to_1=())]})\n@triton.jit\ndef kernel(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):\n xnumel = 10000\n xoffset = tl.program_id(0) * XBLOCK\n xindex = xoffset + tl.reshape(tl.arange(0, XBLOCK), [XBLOCK])\n xmask = xindex < xnumel\n x0 = xindex\n tmp0 = tl.load(in_ptr0 + (x0), xmask)\n tmp1 = tl.sin(tmp0)\n tmp2 = tl.sin(tmp1)\n tl.store(out_ptr0 + (x0 + tl.zeros([XBLOCK], tl.int32)), tmp2, xmask)\n\n```\n\nAnd you can verify that fusing the two `sins` did actually occur because the two `sin` operations occur within a single Triton kernel and the temporary variables are held in registers with very fast access.\n\n### a real model\n\nAs a next step let\u2019s try a real model like resnet50 from the PyTorch hub.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "```python\nimport torch\nmodel = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)\nopt_model = torch.compile(model, backend=\"inductor\")\nmodel(torch.randn(1,3,64,64))\n\n```\n\nIf you actually run you may be surprised that the first run is slow and that\u2019s because the model is being compiled. Subsequent runs will be faster so it's common practice to warm up your model before you start benchmarking it.\n\nYou may have noticed how we also passed in the name of a compiler explicitly here with \u201cinductor\u201d but it\u2019s not the only available backend, you can run in a REPL `torch._dynamo.list_backends()` to see the full list of available backends. For fun you should try out `aot_cudagraphs` or `nvfuser`.\n\n### Hugging Face models", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "Let\u2019s do something a bit more interesting now, our community frequently\nuses pretrained models from transformers [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers) or TIMM [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models) and one of our design goals for PyTorch 2.0 was that any new compiler stack needs to work out of the box with the vast majority of models people actually run.\n\nSo we\u2019re going to directly download a pretrained model from the Hugging Face hub and optimize it\n\n```python", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "```python\n\nimport torch\nfrom transformers import BertTokenizer, BertModel\n# Copy pasted from here https://huggingface.co/bert-base-uncased\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nmodel = BertModel.from_pretrained(\"bert-base-uncased\").to(device=\"cuda:0\")\nmodel = torch.compile(model) # This is the only line of code that we changed\ntext = \"Replace me by any text you'd like.\"\nencoded_input = tokenizer(text, return_tensors='pt').to(device=\"cuda:0\")\noutput = model(**encoded_input)\n\n```\n\nIf you remove the `to(device=\"cuda:0\")` from the model and `encoded_input` then PyTorch 2.0 will generate C++ kernels that will be optimized for running on your CPU. You can inspect both Triton or C++ kernels for BERT, they\u2019re obviously more complex than the trigonometry example we had above but you can similarly skim it and understand if you understand PyTorch.\n\nThe same code also works just fine if used with [https://github.com/huggingface/accelerate](https://github.com/huggingface/accelerate) and DDP", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "Similarly let\u2019s try out a TIMM example\n\n```python\nimport timm\nimport torch\nmodel = timm.create_model('resnext101_32x8d', pretrained=True, num_classes=2)\nopt_model = torch.compile(model, backend=\"inductor\")\nopt_model(torch.randn(64,3,7,7))\n```\n\nOur goal with PyTorch was to build a breadth-first compiler that would speed up the vast majority of actual models people run in open source. The Hugging Face Hub ended up being an extremely valuable benchmarking tool for us, ensuring that any optimization we work on actually helps accelerate models people want to run.\n\nSo please try out PyTorch 2.0, enjoy the free perf and if you\u2019re not seeing it then please open an issue and we will make sure your model is supported [https://github.com/pytorch/torchdynamo/issues](https://github.com/pytorch/torchdynamo/issues)\n\nAfter all, we can\u2019t claim we\u2019re created a breadth-first unless YOUR models actually run faster.", "metadata": {"source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.6 now includes Stochastic Weight Averaging'\nauthor: Pavel Izmailov, Andrew Gordon Wilson and Vincent Queneneville-Belair\n---\n\nDo you use stochastic gradient descent (SGD) or Adam? Regardless of the procedure you use to train your neural network, you can likely achieve significantly better generalization at virtually no additional cost with a simple new technique now natively supported in PyTorch 1.6, Stochastic Weight Averaging (SWA) [1]. Even if you have already trained your model, it\u2019s easy to realize the benefits of SWA by running SWA for a small number of epochs starting with a pre-trained model. [Again](https://twitter.com/MilesCranmer/status/1282140440892932096) and [again](https://twitter.com/leopd/status/1285969855062192129), researchers are discovering that SWA improves the performance of well-tuned models in a wide array of practical applications with little cost or effort!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "SWA has a wide range of applications and features:\n* SWA significantly improves performance compared to standard training techniques in computer vision (e.g., VGG, ResNets, Wide ResNets and DenseNets on ImageNet and CIFAR benchmarks [1, 2]).\n* SWA provides state-of-the-art performance on key benchmarks in semi-supervised learning and domain adaptation [2].\n* SWA was shown to improve performance in language modeling (e.g., AWD-LSTM on WikiText-2 [4]) and policy-gradient methods in deep reinforcement learning [3].\n* SWAG, an extension of SWA, can approximate Bayesian model averaging in Bayesian deep learning and achieves state-of-the-art uncertainty calibration results in various settings. Moreover, its recent generalization MultiSWAG provides significant additional performance gains and mitigates double-descent [4, 10]. Another approach, Subspace Inference, approximates the Bayesian posterior in a small subspace of the parameter space around the SWA solution [5].\n* SWA for low precision training, SWALP, can match the performance of full-precision SGD training, even with all numbers quantized down to 8 bits, including gradient accumulators [6].\n* SWA in parallel, SWAP, was shown to greatly speed up the training of neural networks by using large batch sizes and, in particular, set a record by training a neural network to 94% accuracy on CIFAR-10 in 27 seconds [11].", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
\n\n**Figure 1**. *Illustrations of SWA and SGD with a Preactivation ResNet-164 on CIFAR-100 [1]. **Left**: test error surface for three FGE samples and the corresponding SWA solution (averaging in weight space). **Middle** and **Right**: test error and train loss surfaces showing the weights proposed by SGD (at convergence) and SWA, starting from the same initialization of SGD after 125 training epochs. Please see [1] for details on how these figures were constructed*.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "In short, SWA performs an equal average of the weights traversed by SGD (or any stochastic optimizer) with a modified learning rate schedule (see the left panel of Figure 1.). SWA solutions end up in the center of a wide flat region of loss, while SGD tends to converge to the boundary of the low-loss region, making it susceptible to the shift between train and test error surfaces (see the middle and right panels of Figure 1). We emphasize that SWA **can be used with any optimizer, such as Adam, and is not specific to SGD**.\n\nPreviously, SWA was in PyTorch contrib. In PyTorch 1.6, we provide a new convenient implementation of SWA in [torch.optim.swa_utils](https://pytorch.org/docs/stable/optim.html#stochastic-weight-averaging).\n\n## Is this just Averaged SGD?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "At a high level, averaging SGD iterates dates back several decades in convex optimization [7, 8], where it is sometimes referred to as Polyak-Ruppert averaging, or averaged SGD. **But the details matter**. Averaged SGD is often used in conjunction with a decaying learning rate, and an exponential moving average (EMA), typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. In deep learning, this form of averaged SGD smooths the trajectory of SGD iterates but does not perform very differently.\n\nBy contrast, SWA uses an **equal average** of SGD iterates with a modified **cyclical or high constant learning rate** and exploits the flatness of training objectives [8] specific to **deep learning** for **improved generalization**. \n\n## How does Stochastic Weight Averaging Work?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "There are two important ingredients that make SWA work. First, SWA uses a **modified learning rate** schedule so that SGD (or other optimizers such as Adam) continues to bounce around the optimum and explore diverse models instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see Figure 2 below). The second ingredient is to take an average of the weights **(typically an equal average)** of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained at the end of every epoch within the last 25% of training time (see Figure 2). After training is complete, we then set the weights of the network to the computed SWA averages.\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "**Figure 2**. *Illustration of the learning rate schedule adopted by SWA. Standard decaying schedule is used for the first 75% of the training and then a high constant value is used for the remaining 25%. The SWA averages are formed during the last 25% of training*.\n\nOne important detail is the batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training. So the batch normalization layers do not have the activation statistics computed at the end of training. We can compute these statistics by doing a single forward pass on the train data with the SWA model.\n\nWhile we focus on SGD for simplicity in the description above, SWA can be combined with any optimizer. You can also use cyclical learning rates instead of a high constant value (see e.g., [2]).\n\n## How to use SWA in PyTorch?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "In `torch.optim.swa_utils` we implement all the SWA ingredients to make it convenient to use SWA with any model. In particular, we implement `AveragedModel` class for SWA models, `SWALR` learning rate scheduler, and `update_bn` utility function to update SWA batch normalization statistics at the end of training. \n\nIn the example below, `swa_model` is the SWA model that accumulates the averages of the weights. We train the model for a total of 300 epochs, and we switch to the SWA learning rate schedule and start to collect SWA averages of the parameters at epoch 160. \n\n```python\nfrom torch.optim.swa_utils import AveragedModel, SWALR\nfrom torch.optim.lr_scheduler import CosineAnnealingLR\n\nloader, optimizer, model, loss_fn = ...\nswa_model = AveragedModel(model)\nscheduler = CosineAnnealingLR(optimizer, T_max=100)\nswa_start = 5\nswa_scheduler = SWALR(optimizer, swa_lr=0.05)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "for epoch in range(100):\n for input, target in loader:\n optimizer.zero_grad()\n loss_fn(model(input), target).backward()\n optimizer.step()\n if epoch > swa_start:\n swa_model.update_parameters(model)\n swa_scheduler.step()\n else:\n scheduler.step()\n\n# Update bn statistics for the swa_model at the end\ntorch.optim.swa_utils.update_bn(loader, swa_model)\n# Use swa_model to make predictions on test data \npreds = swa_model(test_input)\n```\n\nNext, we explain each component of `torch.optim.swa_utils` in detail.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "`AveragedModel` class serves to compute the weights of the SWA model. You can create an averaged model by running `swa_model = AveragedModel(model)`. You can then update the parameters of the averaged model by `swa_model.update_parameters(model)`. By default, `AveragedModel` computes a running equal average of the parameters that you provide, but you can also use custom averaging functions with the `avg_fn` parameter. In the following example, `ema_model` computes an exponential moving average.\n\n```python\nema_avg = lambda averaged_model_parameter, model_parameter, num_averaged:\\\n0.1 * averaged_model_parameter + 0.9 * model_parameter\nema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg)\n```\n\nIn practice, we find an equal average with the modified learning rate schedule in Figure 2 provides the best performance.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "`SWALR` is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant. For example, the following code creates a scheduler that linearly anneals the learning rate from its initial value to `0.05` in `5` epochs within each parameter group.\n\n```python\nswa_scheduler = torch.optim.swa_utils.SWALR(optimizer, \nanneal_strategy=\"linear\", anneal_epochs=5, swa_lr=0.05)\n\n```\nWe also implement cosine annealing to a fixed value (`anneal_strategy=\"cos\"`). In practice, we typically switch to `SWALR` at epoch `swa_start` (e.g. after 75% of the training epochs), and simultaneously start to compute the running averages of the weights:\n\n```python\nscheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)\nswa_start = 75\nfor epoch in range(100):\n # \n if i > swa_start:\n swa_model.update_parameters(model)\n swa_scheduler.step()\n else:\n scheduler.step()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "Finally, `update_bn` is a utility function that computes the batchnorm statistics for the SWA model on a given dataloader `loader`:\n```\ntorch.optim.swa_utils.update_bn(loader, swa_model) \n```\n`update_bn` applies the `swa_model` to every element in the dataloader and computes the activation statistics for each batch normalization layer in the model.\n\nOnce you computed the SWA averages and updated the batch normalization layers, you can apply `swa_model` to make predictions on test data.\n\n## Why does it work?", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "## Why does it work?\n\nThere are large flat regions of the loss surface [9]. In Figure 3 below, we show a visualization of the loss surface in a subspace of the parameter space containing a path connecting two independently trained SGD solutions, such that the loss is similarly low at every point along the path. SGD converges near the boundary of these regions because there isn\u2019t much gradient signal to move inside, as the points in the region all have similarly low values of loss. By increasing the learning rate, SWA spins around this flat region, and then by averaging the iterates, moves towards the center of the flat region.\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "**Figure 3**: *visualization of mode connectivity for ResNet-20 with no skip connections on CIFAR-10 dataset. The visualization is created in collaboration with Javier Ideami [(https://losslandscape.com/)](https://losslandscape.com/). For more details, see this [blogpost](https://izmailovpavel.github.io/curves_blogpost/)*.\n\nWe expect solutions that are centered in the flat region of the loss to generalize better than those near the boundary. Indeed, train and test error surfaces are not perfectly aligned in the weight space. Solutions that are centered in the flat region are not as susceptible to the shifts between train and test error surfaces as those near the boundary. In Figure 4 below, we show the train loss and test error surfaces along the direction connecting the SWA and SGD solutions. As you can see, while the SWA solution has a higher train loss compared to the SGD solution, it is centered in a region of low loss and has a substantially better test error.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
\n\n**Figure 4**. *Train loss and test error along the line connecting the SWA solution (circle) and SGD solution (square). The SWA solution is centered in a wide region of low train loss, while the SGD solution lies near the boundary. Because of the shift between train loss and test error surfaces, the SWA solution leads to much better generalization*.\n\n## What are the results achieved with SWA?\n\nWe release a GitHub [repo](https://github.com/izmailovpavel/torch_swa_examples) with examples using the PyTorch implementation of SWA for training DNNs. For example, these examples can be used to achieve the following results on CIFAR-100:\n\n\n {:.table.table-striped.table-bordered}\n | | VGG-16 | ResNet-164 | WideResNet-28x10 | \n| ------------- | ------------- | ------------- | ------------- |\n| SGD | 72.8 \u00b1 0.3 | 78.4 \u00b1 0.3 | 81.0 \u00b1 0.3 | \n| SWA | 74.4 \u00b1 0.3 | 79.8 \u00b1 0.4 | 82.5 \u00b1 0.2 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "## Semi-Supervised Learning\n\nIn a follow-up [paper](https://arxiv.org/abs/1806.05594) SWA was applied to semi-supervised learning, where it improved the best reported results in multiple settings [2]. For example, with SWA you can get 95% accuracy on CIFAR-10 if you only have the training labels for 4k training data points (the previous best reported result on this problem was 93.7%). This paper also explores averaging multiple times within epochs, which can accelerate convergence and find still flatter solutions in a given time.\n\n\n

\n
\n**Figure 5**. Performance of fast-SWA on semi-supervised learning with CIFAR-10. fast-SWA achieves record results in every setting considered.\n\n## Reinforcement Learning", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "In another follow-up [paper](http://www.gatsby.ucl.ac.uk/~balaji/udl-camera-ready/UDL-24.pdf) SWA was shown to improve the performance of policy gradient methods A2C and DDPG on several Atari games and MuJoCo environments [3]. This application is also an instance of where SWA is used with Adam. Recall that SWA is not specific to SGD and can benefit essentially any optimizer.\n\n\n{:.table.table-striped.table-bordered}\n | Environment Name | A2C | A2C + SWA | \n| ------------- | ------------- | ------------- | \n| Breakout | 522 \u00b1 34 | 703 \u00b1 60 |\n| Qbert | 18777 \u00b1 778 | 21272 \u00b1 655 |\n| SpaceInvaders | 7727 \u00b1 1121 | 21676 \u00b1 8897 |\n| Seaquest | 1779 \u00b1 4 | 1795 \u00b1 4 |\n| BeamRider | 9999 \u00b1 402 | 11321 \u00b1 1065 |\n| CrazyClimber | 147030 \u00b1 10239 | 139752 \u00b1 11618 |\n\n\n## Low Precision Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "We can filter through quantization noise by combining weights that have been rounded down with weights that have been rounded up. Moreover, by averaging weights to find a flat region of the loss surface, large perturbations of the weights will not affect the quality of the solution (Figures 9 and 10). Recent [work](https://arxiv.org/abs/1904.11943) shows that by adapting SWA to the low precision setting, in a method called SWALP, one can match the performance of full-precision SGD even with all training in 8 bits [5]. This is quite a practically important result, given that (1) SGD training in 8 bits performs notably worse than full precision SGD, and (2) low precision training is significantly harder than predictions in low precision after training (the usual setting). For example, a ResNet-164 trained on CIFAR-100 with float (16-bit) SGD achieves 22.2% error, while 8-bit SGD achieves 24.0% error. By contrast, SWALP with 8 bit training achieves 21.8% error.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
\n**Figure 9**. *Quantizing a solution leads to a perturbation of the weights which has a greater effect on the quality of the sharp solution (left) compared to wide solution (right)*. \n\n\n\n

\n
\n**Figure 10**. *The difference between standard low precision training and SWALP*.\n\nAnother [work](https://arxiv.org/abs/2002.00343), SQWA, presents an approach for quantization and fine-tuning of neural networks in low precision [12]. In particular, SQWA achieved state-of-the-art results for DNNs quantized to 2 bits on CIFAR-100 and ImageNet.\n\n## Calibration and Uncertainty Estimates\n\nBy finding a centred solution in the loss, SWA can also improve calibration and uncertainty representation. Indeed, SWA can be viewed as an approximation to an ensemble, resembling a Bayesian model average, but with a single model [1].", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "SWA can be viewed as taking the first moment of SGD iterates with a modified learning rate schedule. We can directly generalize SWA by also taking the second moment of iterates to form a Gaussian approximate posterior over the weights, further characterizing the loss geometry with SGD iterates. This approach,[SWA-Gaussian (SWAG)](https://arxiv.org/abs/1902.02476) is a simple, scalable and convenient approach to uncertainty estimation and calibration in Bayesian deep learning [4]. The SWAG distribution approximates the shape of the true posterior: Figure 6 below shows the SWAG distribution and the posterior log-density for ResNet-20 on CIFAR-10.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
\n**Figure 6**. *SWAG posterior approximation and the loss surface for a ResNet-20 without skip-connections trained on CIFAR-10 in the subspace formed by the two largest eigenvalues of the SWAG covariance matrix. The shape of SWAG distribution is aligned with the posterior: the peaks of the two distributions coincide, and both distributions are wider in one direction than in the orthogonal direction. Visualization created in collaboration with* [Javier Ideami](https://losslandscape.com/).\n\nEmpirically, SWAG performs on par or better than popular alternatives including MC dropout, KFAC Laplace, and temperature scaling on uncertainty quantification, out-of-distribution detection, calibration and transfer learning in computer vision tasks. Code for SWAG is available [here](https://github.com/wjmaddox/swa_gaussian).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
\n**Figure 7**. *MultiSWAG generalizes SWAG and deep ensembles, to perform Bayesian model averaging over multiple basins of attraction, leading to significantly improved performance. By contrast, as shown here, deep ensembles select different modes, while standard variational inference (VI) marginalizes (model averages) within a single basin*.\n\nMultiSWAG [9] uses multiple independent SWAG models to form a mixture of Gaussians as an approximate posterior distribution. Different basins of attraction contain highly complementary explanations of the data. Accordingly, marginalizing over these multiple basins provides a significant boost in accuracy and uncertainty representation. MultiSWAG can be viewed as a generalization of deep ensembles, but with performance improvements.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "Indeed, we see in Figure 8 that MultiSWAG entirely mitigates double descent -- more flexible models have monotonically improving performance -- and provides significantly improved generalization over SGD. For example, when the ResNet-18 has layers of width 20, Multi-SWAG achieves under 30% error whereas SGD achieves over 45%, more than a 15% gap! \n\n\n

\n
\n**Figure 8**. *SGD, SWAG, and Multi-SWAG on CIFAR-100 for a ResNet-18 with varying widths. We see Multi-SWAG in particular mitigates double descent and provides significant accuracy improvements over SGD*.\n\nReference [10] also considers Multi-SWA, which uses multiple independently trained SWA solutions in an ensemble, providing performance improvements over deep ensembles without any additional computational cost. Code for MultiSWA and MultiSWAG is available [here](https://github.com/izmailovpavel/understandingbdl).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
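+{"page_content": "The ensembling step behind Multi-SWA and MultiSWAG is itself straightforward; here is a minimal sketch (assuming a list of already-trained SWA models for a classification task, with batch norm statistics already refreshed via `update_bn`) that averages the predicted class probabilities of the independent solutions:\n\n```python\nimport torch\nimport torch.nn.functional as F\n\n@torch.no_grad()\ndef multi_swa_predict(swa_models, x):\n    # Average class probabilities over independently trained SWA solutions.\n    probs = torch.stack([F.softmax(m(x), dim=-1) for m in swa_models])\n    return probs.mean(dim=0)\n\n# avg_probs = multi_swa_predict(swa_models, test_input)\n# prediction = avg_probs.argmax(dim=-1)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}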
+{"page_content": "Another [method](https://arxiv.org/abs/1907.07504), Subspace Inference, constructs a low-dimensional subspace around the SWA solution and marginalizes the weights in this subspace to approximate the Bayesian model average [5]. Subspace Inference uses the statistics from the SGD iterates to construct both the SWA solution and the subspace. The method achieves strong performance in terms of prediction accuracy and uncertainty calibration both in classification and regression problems. Code is available [here](https://github.com/wjmaddox/drbayes).\n\n## Try it Out!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "## Try it Out!\n\nOne of the greatest open questions in deep learning is why SGD manages to find good solutions, given that the training objectives are highly multimodal, and there are many settings of parameters that achieve no training loss but poor generalization. By understanding geometric features such as flatness, which relate to generalization, we can begin to resolve these questions and build optimizers that provide even better generalization, and many other useful features, such as uncertainty representation. We have presented SWA, a simple drop-in replacement for standard optimizers such as SGD and Adam, which can in principle, benefit anyone training a deep neural network. SWA has been demonstrated to have a strong performance in several areas, including computer vision, semi-supervised learning, reinforcement learning, uncertainty representation, calibration, Bayesian model averaging, and low precision training.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "We encourage you to try out SWA! SWA is now as easy as any standard training in PyTorch. And even if you have already trained your model, you can use SWA to significantly improve performance by running it for a small number of epochs from a pre-trained model. \n\n\n[1] Averaging Weights Leads to Wider Optima and Better Generalization; Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2018.\n\n[2] There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average; Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson; \nInternational Conference on Learning Representations (ICLR), 2019.\n\n[3] Improving Stability in Deep Reinforcement Learning with Weight Averaging; Evgenii Nikishin, Pavel Izmailov, Ben Athiwaratkun, Dmitrii Podoprikhin, \nTimur Garipov, Pavel Shvechikov, Dmitry Vetrov, Andrew Gordon Wilson; UAI 2018 Workshop: Uncertainty in Deep Learning, 2018.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "[4] A Simple Baseline for Bayesian Uncertainty in Deep Learning\nWesley Maddox, Timur Garipov, Pavel Izmailov, Andrew Gordon Wilson; Neural Information Processing Systems (NeurIPS), 2019.\n\n[5] Subspace Inference for Bayesian Deep Learning\nPavel Izmailov, Wesley Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson\nUncertainty in Artificial Intelligence (UAI), 2019.\n\n[6] SWALP : Stochastic Weight Averaging in Low Precision Training\nGuandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, \nAndrew Gordon Wilson, Christopher De Sa; International Conference on Machine Learning (ICML), 2019.\n\n[7] David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process; Technical report, Cornell University Operations Research and Industrial Engineering, 1988.\n\n[8] Acceleration of stochastic approximation by averaging. Boris T Polyak and Anatoli B Juditsky; SIAM Journal on Control and Optimization, 30(4):838\u2013855, 1992.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "[9] Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs\nTimur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, \nAndrew Gordon Wilson. Neural Information Processing Systems (NeurIPS), 2018.\n\n[10] Bayesian Deep Learning and a Probabilistic Perspective of Generalization\nAndrew Gordon Wilson, Pavel Izmailov. ArXiv preprint, 2020.\n\n[11] Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well\nGupta, Vipul, Santiago Akle Serrano, and Dennis DeCoste; International Conference on Learning Representations (ICLR). 2019.\n\n[12] SQWA: Stochastic Quantized Weight Averaging for Improving the Generalization Capability of Low-Precision Deep Neural Networks\nShin, Sungho, Yoonho Boo, and Wonyong Sung; arXiv preprint 2020.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing TorchRec, and other domain library updates in PyTorch 1.11\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/pytorch-logo.jpg\"\n---\n\nWe are introducing the beta release of TorchRec and a number of improvements to the current PyTorch domain libraries, alongside the [PyTorch 1.11 release](https://pytorch.org/blog/pytorch-1.11-released/). These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch. Highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "- **TorchRec**, a PyTorch domain library for Recommendation Systems, is available in beta. [View it on GitHub](https://github.com/pytorch/torchrec).\n- **TorchAudio** - Added Enformer- and RNN-T-based models and recipes to support the full development lifecycle of a streaming ASR model. See the release notes [here](https://github.com/pytorch/audio/releases).\n- **TorchText** - Added beta support for RoBERTa and XLM-R models, byte-level BPE tokenizer, and text datasets backed by TorchData. See the release notes [here](https://github.com/pytorch/text/releases).\n- **TorchVision** - Added 4 new model families and 14 new classification datasets such as CLEVR, GTSRB, FER2013. See the release notes [here](https://github.com/pytorch/vision/releases).\n\n## TorchRec 0.1", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "## TorchRec 0.1\n\nWe [announced TorchRec](https://pytorch.org/blog/introducing-torchrec/) a few weeks ago and we are excited to release the beta version today. To recap, TorchRec is a PyTorch domain library for Recommendation Systems. This new library provides common sparsity and parallelism primitives, enabling researchers to build state-of-the-art personalization models and deploy them in production. TorchRec was used to train a 1.25 trillion parameter model, pushed to production in January 2022.\n\nIn particular, the library includes:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "- Modeling primitives, such as embedding bags and jagged tensors, that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism and model-parallelism.\n- Optimized RecSys kernels powered by [FBGEMM](https://github.com/pytorch/FBGEMM), including support for sparse and quantized operations.\n- A sharder which can partition embedding tables with a variety of different strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.\n- A planner which can automatically generate optimized sharding plans for models.\n- Pipelining to overlap dataloading device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.\n- GPU inference support.\n- Common modules for RecSys, such as models and public datasets (Criteo & Movielens).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "Please check the TorchRec announcement post [here](https://pytorch.org/blog/introducing-torchrec/), [video tutorial](https://www.youtube.com/watch?v=cjgj41dvSeQ), install instructions [here](https://github.com/pytorch/torchrec#readme), test drive the feature through this tutorial [here](https://pytorch.org/tutorials/intermediate/torchrec_tutorial.html), and refer to the reference document [here](https://pytorch.org/torchrec/).\n\n## TorchAudio 0.11\n\n#### TorchAudio: Building Blocks for Audio and Speech Processing\n\nWe published a paper, [TorchAudio: Building Blocks for Audio and Speech Processing](https://arxiv.org/abs/2110.15018), describing the overview of the TorchAudio library. If you find TorchAudio useful for your research, please help us share with the community by citing our paper.\n\n#### (Beta) RNN-T & (Prototype) Emformer Models and Recipes\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "Emformer is an efficient memory-transformer-based streaming acoustic model that has demonstrated state-of-the-art streaming automatic speech recognition (ASR) performance in low-latency, resource-constrained scenarios, such as on-device applications (citation: [https://arxiv.org/abs/2010.10759](https://arxiv.org/abs/2010.10759)).\n\nThe TorchAudio v0.11 release includes the following beta features:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "- Implementation of Emformer ([docs](https://pytorch.org/audio/main/models.html#emformer))\n- Recurrent neural network transducer (RNN-T) streaming ASR model that uses Emformer for its transcription network ([docs](https://pytorch.org/audio/main/models.html#rnn-t))\n- RNN-T beam search decoder with TorchScript support ([docs](https://pytorch.org/audio/main/models.html#rnntbeamsearch))\n- LibriSpeech Emformer RNN-T training recipe ([GitHub](https://github.com/pytorch/audio/tree/release/0.11/examples/asr/librispeech_emformer_rnnt)) and corresponding pre-trained streaming ASR inference pipeline ([docs](https://pytorch.org/audio/main/pipelines.html#emformer-rnnt-base-librispeech))\n\nAlso there are prototype features that are available from nightly builds or the main branch.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "- Training recipes trained on MuST-C and TED-LIUM3 datasets. ([GitHub](https://github.com/pytorch/audio/tree/main/examples/asr/emformer_rnnt))\n- Pre-trained pipelines corresponding to the recipes. ([docs](https://pytorch.org/audio/main/prototype.pipelines.html))\n- Tutorial that steps through performing online speech recognition with RNN-T Emformer model. ([docs](https://pytorch.org/audio/main/tutorials/online_asr_tutorial.html))\n\nCollectively, these features cover the full development lifecycle of a streaming ASR model, from definition through training and inference, and enable users to easily develop their own Emformer- and RNN-T-based models.\n\nSpecial thanks to Yangyang Shi, Jay Mahadeokar, and Gil Keren for their code contributions and guidance.\n\n#### (Beta) HuBERT Pretrain Model", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "The masked prediction training of HuBERT model requires the masked logits, unmasked logits, and feature norm as the outputs. The logits are for cross-entropy losses and the feature norm is for penalty loss. The release adds [HuBERTPretrainModel](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L120-L205) and corresponding factory functions ([hubert_pretrain_base](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L964-L1027), [hubert_pretrain_large](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L1030-L1090), and [hubert_pretrain_xlarge](https://github.com/pytorch/audio/blob/main/torchaudio/models/wav2vec2/model.py#L1093-L1153)) to enable training from scratch.\n\n#### (Prototype) CTC Beam Search Decoder\n\nIn recent releases, TorchAudio has added support for ASR models fine-tuned on CTC loss. The addition of an inference time CTC beam search decoder enables running end-to-end ASR evaluation using TorchAudio utils.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "The CTC decoder in TorchAudio supports customizable beam search decoding with lexicon constraint. It also has optional KenLM language model support.\n\nFor more details, please check out the [API tutorial](https://pytorch.org/audio/main/tutorials/asr_inference_with_ctc_decoder_tutorial.html) and [documentation](https://pytorch.org/audio/main/prototype.ctc_decoder.html). This prototype feature is available through nightly builds.\n\n#### (Prototype) Streaming API\n\nTorchAudio started as simple audio I/O APIs that supplement PyTorch. With the recent addition of ASR models and training recipes, the project has received requests to support high-level application development.\n\nStreaming API makes it easy to develop and test the model in online inference. It utilizes ffmpeg under the hood, and enables reading media from online services and hardware devices, decoding media in an incremental manner, and applying filters and preprocessing.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "Please checkout the [API tutorial](https://pytorch.org/audio/main/tutorials/streaming_api_tutorial.html) and [the documentation](https://pytorch.org/audio/main/prototype.io.html). There are also the [streaming ASR](https://pytorch.org/audio/main/tutorials/online_asr_tutorial.html) tutorial and the [device streaming ASR tutorial](https://pytorch.org/audio/main/tutorials/device_asr.html). This feature is available from nightly releases. Please refer to [pytorch.org](https://pytorch.org/get-started/locally/) for how to install nightly builds.\n\n## TorchText 0.12\n\n#### (Beta) RoBERTa and XLM-R Models\n\nTorchText has added support for pre-trained RoBERTa and XLM-R models. It would allow users to train end-2-end Transformer Encoder based models on standard NLP tasks using TorchText.\n\nMore specifically:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "More specifically:\n\n- The models are torchscriptable and hence can be employed for production use-cases.\n- The model APIs let users to easily attach custom task-specific heads with pre-trained encoders.\n- The API also comes equipped with data pre-processing transforms to match the pre-trained weights and model configuration.\n\nWe have added a [tutorial](https://pytorch.org/text/main/tutorials/sst2_classification_non_distributed.html) to demonstrate SST-2 binary text classification task with pre-trained XLM-R base architecture.\n\nFor additional details on model APIs and usage examples, please refer to the [documentation](https://pytorch.org/text/main/models.html).\n\n#### (Beta) byte-level BPE tokenizer", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "TorchText has added support for a Byte-Level BPE tokenizer, as used in GPT-2. This tokenizer is also used for tokenizing inputs to the pre-trained RoBERTa models described previously. In addition to the RoBERTa vocab, users can also load their own custom BPE vocab to use the tokenizer. Furthermore, the tokenizer is fully torchscriptable and hence can be employed for production use-cases. For additional details on model APIs and usage examples, please refer to the [documentation](https://pytorch.org/text/main/transforms.html#gpt2bpetokenizer).\n\n#### (Beta) Text datasets backed by TorchData\n\nTorchText has modernized its datasets by migrating from older-style Iterable Datasets to [TorchData\u2019s](https://github.com/pytorch/data#readme) DataPipes. TorchData is a library that provides modular/composable primitives, allowing users to load and transform data in performant data pipelines.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "These DataPipes work out-of-the-box with PyTorch DataLoader and would enable new functionalities like auto-sharding. Users can now easily do data manipulation and pre-processing using user-defined functions and transformations in a functional style programming. Datasets backed by DataPipes also enable standard flow-control like batching, collation, shuffling and bucketizing.\n\nCollectively, DataPipes provides a comprehensive experience for data preprocessing and tensorization needs in a pythonic and flexible way for model training. We have added a [tutorial](https://pytorch.org/text/main/tutorials/sst2_classification_non_distributed.html) to demonstrate data-processing pipelining using the modernized dataset for binary text-classification.\n\nYou can learn more about TorchData DataPipe APIs in its [official documentation](https://pytorch.org/data).\n\n## TorchVision 0.12\n\n### New Models\n\nFour new model families have been released in the latest version along with pre-trained weights for their variants.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "#### #1 Object Detection\n\n[FCOS](https://arxiv.org/pdf/1904.01355.pdf) is a popular, fully convolutional, anchor-free model for object detection. In this release we include a community-contributed model implementation as well as pre-trained weights. The model was trained on COCO train2017 and can be used as follows:\n\n```python\nimport torch\nfrom torchvision import models\n\nx = [torch.rand(3, 224, 224)]\nfcos = models.detection.fcos_resnet50_fpn(pretrained=True).eval()\npredictions = fcos(x)\n```\n\nThe box AP of the pre-trained model on COCO val2017 is 39.2 (see [#4961](https://github.com/pytorch/vision/pull/4961) for more details).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "We would like to thank [Hu Ye](https://github.com/xiaohu2015) and [Zhiqiang Wang](https://github.com/zhiqwang) for contributing to the model implementation and initial training. This was the first community-contributed model in a long while, and given its success, we decided to use the learnings from this process and create a new [model contribution guidelines](https://github.com/pytorch/vision/blob/main/CONTRIBUTING_MODELS.md).\n\n#### #2 Optical Flow support and RAFT model\n\nTorchVision now supports optical flow! Optical Flow models try to predict movement in a video: given two consecutive frames, the model predicts where each pixel of the first frame ends up in the second frame. Check out our [new tutorial on Optical Flow](https://pytorch.org/vision/0.12/auto_examples/plot_optical_flow.html#sphx-glr-auto-examples-plot-optical-flow-py)!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "We implemented a torchscript-compatible [RAFT](https://arxiv.org/abs/2003.12039) model with pre-trained weights (both normal and \u201csmall\u201d versions), and added support for [training and evaluating](https://github.com/pytorch/vision/tree/main/references/optical_flow) optical flow models. Our training scripts support distributed training across processes and nodes, leading to much faster training time than the original implementation. We also added 5 new [optical flow datasets](https://pytorch.org/vision/0.12/datasets.html#optical-flow): Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.\n\n\n
\n
\n\n#### #3. Image Classification", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "[Vision Transformer](https://arxiv.org/abs/2010.11929) (ViT) and [ConvNeXt](https://arxiv.org/abs/2201.03545) are two popular architectures which can be used as image classifiers or as backbones for downstream vision tasks. In this release we include 8 pre-trained weights for their classification variants. The models were trained on ImageNet and can be used as follows:\n\n```python\nimport torch\nfrom torchvision import models\n\nx = torch.rand(1, 3, 224, 224)\nvit = models.vit_b_16(pretrained=True).eval()\nconvnext = models.convnext_tiny(pretrained=True).eval()\npredictions1 = vit(x)\npredictions2 = convnext(x)\n```\n\nThe accuracies of the pre-trained models obtained on ImageNet val are seen below:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "| **Model** | **Acc@1** | **Acc@5** |\n| -------------- | --------: | --------: |\n| vit_b_16 | 81.072 | 95.318 |\n| vit_b_32 | 75.912 | 92.466 |\n| vit_l_16 | 79.662 | 94.638 |\n| vit_l_32 | 76.972 | 93.07 |\n| convnext_tiny | 82.52 | 96.146 |\n| convnext_small | 83.616 | 96.65 |\n| convnext_base | 84.062 | 96.87 |\n| convnext_large | 84.414 | 96.976 |\n\nThe above models have been trained using an adjusted version of our new [training recipe](https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/) and this allows us to offer models with accuracies significantly higher than the ones on the original papers.\n\n#### #4. GPU Video Decoding\n\nIn this release, we add support for GPU video decoding in the video reading API. To use hardware-accelerated decoding, we just need to pass a cuda device to the video reading API as shown below:\n\n```python\nimport torchvision", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "reader = torchvision.io.VideoReader(file_name, device=\"cuda:0\")\nfor frame in reader:\n print(frame)\n```\n\nWe also support seeking to anyframe or a keyframe in the video before reading, as shown below:\n\n```python\nreader.seek(seek_time)\n```\n\n### New Datasets\n\nWe have implemented 14 new [classification datasets](https://pytorch.org/vision/0.12/datasets.html#image-classification): CLEVR, GTSRB, FER2013, SUN397, Country211, Flowers102, fvgc_aircraft, OxfordIIITPet, DTD, Food 101, Rendered SST2, Stanford cars, PCAM, and EuroSAT.\n\nAs part of our work on Optical Flow support (see above for more details), we also added 5 new [optical flow datasets](https://pytorch.org/vision/0.12/datasets.html#optical-flow): Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.\n\n### Other Updates", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "- **New documentation layout**: Each function / class is now documented in a separate page, clearing up some space in the per-module pages, and easing the discovery of the proposed APIs. Compare e.g. our [previous docs](https://pytorch.org/vision/0.11/transforms.html) vs the [new ones](https://pytorch.org/vision/0.12/transforms.html). Please let us know if you have any [feedback](https://github.com/pytorch/vision/issues/5511)!\n- **New [model contribution guidelines](https://github.com/pytorch/vision/blob/main/CONTRIBUTING_MODELS.md)** have been published following the success of the [FCOS](https://github.com/pytorch/vision/pull/4961) model which was contributed by the community. These guidelines aim to be an overview of the model contribution process for anyone who would like to suggest, implement and train a new model.\n- **Upcoming Prototype API** - We are currently working on a prototype API which adds Multi-weight support on all of our model builder methods. This will enable us to offer multiple pre-trained weights, associated with their meta-data and inference transforms. The API is still under review and thus was not included in the release but you can read more about it on our [blogpost](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) and provide your feedback on the dedicated [Github issue](https://github.com/pytorch/vision/issues/5088).\n- **Changes in our deprecation policy** - Up until now, torchvision would almost never remove deprecated APIs. In order to be more aligned and consistent with pytorch core, we are updating our deprecation policy. We are now following a 2-release deprecation cycle: deprecated APIs will raise a warning for 2 versions, and will be removed after that. To reflect these changes and to smooth the transition, we have decided to:\n - Remove all APIs that had been deprecated before or on v0.8, released 1.5 years ago.\n - Update the removal timeline of all other deprecated APIs to v0.14, to reflect the new 2-cycle policy starting now in v0.12.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "### Captum 0.5\n\n[Captum](https://captum.ai/) is a PyTorch library for model interpretability. For this release, we expanded Captum with influential instances and added support for both similarity based influences and novel algorithms, [TracIn](https://arxiv.org/abs/2002.08484) and its variants. TracIn variants offer faster approximation of influence scores based on random projections for fully connected layers.\n\nMore specifically the new, influence, subsection of Captum includes:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "- **[SimilarityInfluence](https://captum.ai/api/influence.html#similarityinfluence)** computes similarity scores between test and training examples using default (cosine or euclidean) or custom user definite metrics w.r.t. given input model layers.\n- **[TracInCP](https://captum.ai/api/influence.html#tracincp)** approximates the influential score of each training example on a given test example based on the dot-product similarity between loss gradients w.r.t. model parameters for test and training examples. Note that if we use training examples as test examples then we compute self influence. This method and its variants described below also return top-k proponents and opponents which are the top-k largest positive and negative influential examples respectively.\n- **[TracInCPFast](https://captum.ai/api/influence.html#tracincpfast)** is an approximation of TracInCP that avoids computing the gradients w.r.t. large parameter matrices. It approximates influence score based on the dot products between last fully connected layer activations and loss gradients w.r.t. that layer for training and test examples.\n- **[TracInCPFastRandProj](https://captum.ai/api/influence.html#tracincpfastrandproj)** uses a nearest neighbor approximation library such as annoy to compute the dot product between the training and test quantities. In order to reduce the dimensionality of layer activations and corresponding gradients this method, in addition, allows to project those vectors into a lower dimensional space using random projection matrices.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "More about the implementation of influential instances can be found on our [GitHub](https://github.com/pytorch/captum/tree/master/captum/influence) page and [tutorials](https://captum.ai/tutorials/TracInCP_Tutorial).\n\nThanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), and [LinkedIn](https://www.linkedin.com/company/pytorch).\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "\n \n
\n", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"}}
{"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch 2.0 & XLA\u2014The Latest Cutting Edge Features\"\nauthor: Jack Cao, Milad Mohammadi, Alex Wertheim, Yeounoh Chung, Joe Spisak, Will Cromar, Shauheen Zahirazami\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Today, we are excited to share our latest work for [PyTorch/XLA 2.0](https://github.com/pytorch/xla/releases/tag/v2.0.0). The release of [PyTorch 2.0](https://pytorch.org/get-started/pytorch-2.0/) is yet another major milestone for this storied community and we are excited to continue to be part of it. When the [PyTorch/XLA](https://github.com/pytorch/xla) project started in 2018 between Google and Meta, the focus was on bringing cutting edge Cloud TPUs to help support the PyTorch community. Along the way,", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "others in the community such as Amazon joined the project and very quickly the community expanded. We are excited about XLA's [direction](https://opensource.googleblog.com/2023/03/openxla-is-ready-to-accelerate-and-simplify-ml-development.html) and the benefits this project continues to bring to the PyTorch community. In this blog we\u2019d like to showcase some key features that have been in development, show code snippets, and illustrate the benefit through some benchmarks.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "TorchDynamo / torch.compile (Experimental)\n\n[TorchDynamo](https://github.com/pytorch/torchdynamo) (Dynamo) is a Python-level JIT compiler designed to make unmodified PyTorch programs faster. It provides a clean API for compiler backends to hook in; its biggest feature is to dynamically modify Python bytecode just before execution. In the PyTorch/XLA 2.0 release, an experimental backend for Dynamo is provided for both inference and training.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Dynamo provides a [Torch FX](https://pytorch.org/docs/stable/fx.html) (FX) graph when it recognizes a model pattern and PyTorch/XLA uses a Lazy Tensor approach to compile the FX graph and return the compiled function. To get more insight regarding the technical details about PyTorch/XLA\u2019s dynamo implementation, check out [this](https://dev-discuss.pytorch.org/t/torchdynamo-update-10-integrating-with-pytorch-xla-for-inference-and-training/935) dev-discuss post and [dynamo", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "doc](https://github.com/pytorch/xla/blob/r2.0/docs/dynamo.md).", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Here is a small code example of running ResNet18 with `torch.compile`:\n\n```\nimport torch\nimport torchvision\nimport torch_xla.core.xla_model as xm\n\ndef eval_model(loader):\n device = xm.xla_device()\n xla_resnet18 = torchvision.models.resnet18().to(device)\n xla_resnet18.eval()\n dynamo_resnet18 = torch.compile(\n xla_resnet18, backend='torchxla_trace_once')\n for data, _ in loader:\n output = dynamo_resnet18(data)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "With `torch.compile` PyTorch/XLA only traces the ResNet18 model once during the init time and executes the compiled binary everytime `dynamo_resnet18` is invoked, instead of tracing the model every step. To illustrate the benefits of Dynamo+XLA, below is an inference speedup analysis to compare Dynamo and LazyTensor (without Dynamo) using TorchBench on a Cloud TPU v4-8 where the y-axis is the speedup multiplier.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "{:width=\"100%\"}", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Dynamo for training is in the development stage with its implementation being at an earlier stage than inference. Developers are welcome to test this early feature, however, in the 2.0 release, PyTorch/XLA supports the forward and backward pass graphs and not the optimizer graph; the optimizer graph is available in the nightly builds and will land in the PyTorch/XLA 2.1 release. Below is an example of what training looks like using the ResNet18 example with `torch.compile`:", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "```\nimport torch\nimport torchvision\nimport torch_xla.core.xla_model as xm\n\ndef train_model(model, data, target):\n loss_fn = torch.nn.CrossEntropyLoss()\n pred = model(data)\n loss = loss_fn(pred, target)\n loss.backward()\n return pred", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "def train_model_main(loader):\n device = xm.xla_device()\n xla_resnet18 = torchvision.models.resnet18().to(device)\n xla_resnet18.train()\n dynamo_train_model = torch.compile(\n train_model, backend='aot_torchxla_trace_once')\n for data, target in loader:\n output = dynamo_train_model(xla_resnet18, data, target)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Note that the backend for training is `aot_torchxla_trace_once` (API will be updated for stable release) whereas the inference backend is `torchxla_trace_once` (name subject to change). We expect to extract and execute 3 graphs per training step instead of 1 training step if you use the Lazy tensor. Below is a training speedup analysis to compare Dynamo and Lazy using the TorchBench on Cloud TPU v4-8.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "{:width=\"100%\"}", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "PJRT Runtime (Beta)\n\nPyTorch/XLA is migrating from XRT to the new PJRT runtime. PJRT is a better-maintained stack, with demonstrated performance advantages, including, on average, a 35% performance for training on TorchBench 2.0 models. It also supports a richer set of features enabling technologies like SPMD. In the PyTorch/XLA 2.0 release, PJRT is the default runtime for TPU and CPU; GPU support is in experimental state. The PJRT features included in the PyTorch/XLA 2.0 release are:", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "* TPU runtime implementation in `libtpu` using the [PJRT Plugin API](https://github.com/openxla/community/blob/main/rfcs/20230123-pjrt-plugin.md#rfc-openxla-pjrt-plugin) improves performance by up to 30%\n* `torch.distributed` support for TPU v2 and v3, including `pjrt://` `init_method` (Experimental)\n* Single-host GPU support. Multi-host support coming soon. (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Switching to PJRT requires no change (or minimal change for GPUs) to user code (see [pjrt.md](https://github.com/pytorch/xla/blob/master/docs/pjrt.md) for more details). Runtime configuration is as simple as setting the `PJRT_DEVICE` environment variable to the local device type (i.e. `TPU`, `GPU`, `CPU`). Below are examples of using PJRT runtimes on different devices. \n\n```\n# TPU Device\nPJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "```\n# TPU Pod Device\ngcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command=\"git clone --depth=1 --branch r2.0 https://github.com/pytorch/xla.git\"\n\ngcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command=\"PJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1\"", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "```\n# GPU Device (Experimental)\nPJRT_DEVICE=GPU GPU_NUM_DEVICES=4 python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=128 --num_epochs=1\n```\n\nBelow is a performance comparison between XRT and PJRT by task on TorchBench 2.0 on v4-8 TPU. To learn more about PJRT vs. XRT please review the [documentation](https://github.com/pytorch/xla/blob/r2.0/docs/pjrt.md#tpu).\n\n\n{:width=\"100%\"}", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Parallelization", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "GSPMD (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "We are delighted to introduce General and Scalable Parallelization for ML Computation Graphs ([GSPMD](https://arxiv.org/abs/2105.04663)) in PyTorch as a new experimental data & model sharding solution. [GSPMD](https://arxiv.org/abs/2105.04663) provides automatic parallelization for common ML workloads, allowing developers to write PyTorch programs as if on a single large device and without custom sharded computation ops and/or collective communication ops. The XLA compiler transforms the single device", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "program into a partitioned one with proper collectives, based on the user provided sharding hints. The API ([RFC](https://github.com/pytorch/xla/issues/3871)) will be available in the PyTorch/XLA 2.0 release as an experimental feature on a single TPU VM host.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Next Steps for GSPMD\n\nGSPMD is experimental in 2.0 release. To bring it to Stable status, we plan to address a number of feature gaps and known issues in the following releases, including multi-host support, DTensor integration, partial replication sharding, asynchronous data loading, and checkpointing.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "FSDP (Beta)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch/XLA [introduced](https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/) fully sharded data parallel (FSDP) experimental support in version 1.12. This feature is a parallel representation of PyTorch FSDP and there are subtle differences in how XLA and upstream CUDA kernels are set up. `auto_wrap_policy` is a new argument that enables developers to automatically specify conditions for propagating partitioning specifications to neural network submodules. `auto_wrap_policy`s may be", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "simply passed in as an argument when wrapping a model with FSDP. Two `auto_wrap_policy` callables worth noting are: `size_based_auto_wrap_policy`, `transformer_auto_wrap_policy`.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "`size_based_auto_wrap_policy` enables users to wrap submodules with a minimum number of parameters. The example below wraps model submodules having at least 10M parameters.\n\n```\nauto_wrap_policy = partial(size_based_auto_wrap_policy, min_num_params=1e7)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "`transformer_auto_wrap_policy` enables users to wrap all submodules that match a specific layer type. The example below wraps model submodules named `torch.nn.Conv2d`. To learn more, review [this ResNet example](https://github.com/pytorch/xla/blob/master/test/test_train_mp_imagenet_fsdp.py#L237-L255) by Ronghang Hu.\n\n```\nauto_wrap_policy = partial(transformer_auto_wrap_policy, transformer_layer_cls={torch.nn.Conv2d})", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "Today, we are excited to share our latest work for [PyTorch/XLA 2.0](https://github.com/pytorch/xla/releases/tag/v2.0.0). The release of [PyTorch 2.0](https://pytorch.org/get-started/pytorch-2.0/) is yet another major milestone for this storied community and we are excited to continue to be part of it. When the [PyTorch/XLA](https://github.com/pytorch/xla) project started in 2018 between Google and Meta, the focus was on bringing cutting edge Cloud TPUs to help support the PyTorch community. Along the way, others in the community such as Amazon joined the project and very quickly the community expanded. We are excited about XLA's [direction](https://opensource.googleblog.com/2023/03/openxla-is-ready-to-accelerate-and-simplify-ml-development.html) and the benefits this project continues to bring to the PyTorch community. In this blog we\u2019d like to showcase some key features that have been in development, show code snippets, and illustrate the benefit through some benchmarks.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "## TorchDynamo / torch.compile (Experimental)\n\n[TorchDynamo](https://github.com/pytorch/torchdynamo) (Dynamo) is a Python-level JIT compiler designed to make unmodified PyTorch programs faster. It provides a clean API for compiler backends to hook in; its biggest feature is to dynamically modify Python bytecode just before execution. In the PyTorch/XLA 2.0 release, an experimental backend for Dynamo is provided for both inference and training. \n\nDynamo provides a [Torch FX](https://pytorch.org/docs/stable/fx.html) (FX) graph when it recognizes a model pattern and PyTorch/XLA uses a Lazy Tensor approach to compile the FX graph and return the compiled function. To get more insight regarding the technical details about PyTorch/XLA\u2019s dynamo implementation, check out [this](https://dev-discuss.pytorch.org/t/torchdynamo-update-10-integrating-with-pytorch-xla-for-inference-and-training/935) dev-discuss post and [dynamo doc](https://github.com/pytorch/xla/blob/r2.0/docs/dynamo.md).", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "Here is a small code example of running ResNet18 with `torch.compile`:\n\n```\nimport torch\nimport torchvision\nimport torch_xla.core.xla_model as xm\n\ndef eval_model(loader):\n device = xm.xla_device()\n xla_resnet18 = torchvision.models.resnet18().to(device)\n xla_resnet18.eval()\n dynamo_resnet18 = torch.compile(\n xla_resnet18, backend='torchxla_trace_once')\n for data, _ in loader:\n output = dynamo_resnet18(data)\n```\n\nWith `torch.compile` PyTorch/XLA only traces the ResNet18 model once during the init time and executes the compiled binary everytime `dynamo_resnet18` is invoked, instead of tracing the model every step. To illustrate the benefits of Dynamo+XLA, below is an inference speedup analysis to compare Dynamo and LazyTensor (without Dynamo) using TorchBench on a Cloud TPU v4-8 where the y-axis is the speedup multiplier.\n\n\n{:width=\"100%\"}", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "Dynamo for training is in the development stage with its implementation being at an earlier stage than inference. Developers are welcome to test this early feature, however, in the 2.0 release, PyTorch/XLA supports the forward and backward pass graphs and not the optimizer graph; the optimizer graph is available in the nightly builds and will land in the PyTorch/XLA 2.1 release. Below is an example of what training looks like using the ResNet18 example with `torch.compile`:\n\n```\nimport torch\nimport torchvision\nimport torch_xla.core.xla_model as xm\n\ndef train_model(model, data, target):\n loss_fn = torch.nn.CrossEntropyLoss()\n pred = model(data)\n loss = loss_fn(pred, target)\n loss.backward()\n return pred", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "def train_model_main(loader):\n device = xm.xla_device()\n xla_resnet18 = torchvision.models.resnet18().to(device)\n xla_resnet18.train()\n dynamo_train_model = torch.compile(\n train_model, backend='aot_torchxla_trace_once')\n for data, target in loader:\n output = dynamo_train_model(xla_resnet18, data, target)\n```\n\nNote that the backend for training is `aot_torchxla_trace_once` (API will be updated for stable release) whereas the inference backend is `torchxla_trace_once` (name subject to change). We expect to extract and execute 3 graphs per training step instead of 1 training step if you use the Lazy tensor. Below is a training speedup analysis to compare Dynamo and Lazy using the TorchBench on Cloud TPU v4-8.\n\n\n{:width=\"100%\"}\n\n\n\n## PJRT Runtime (Beta)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "PyTorch/XLA is migrating from XRT to the new PJRT runtime. PJRT is a better-maintained stack, with demonstrated performance advantages, including, on average, a 35% performance for training on TorchBench 2.0 models. It also supports a richer set of features enabling technologies like SPMD. In the PyTorch/XLA 2.0 release, PJRT is the default runtime for TPU and CPU; GPU support is in experimental state. The PJRT features included in the PyTorch/XLA 2.0 release are:\n\n* TPU runtime implementation in `libtpu` using the [PJRT Plugin API](https://github.com/openxla/community/blob/main/rfcs/20230123-pjrt-plugin.md#rfc-openxla-pjrt-plugin) improves performance by up to 30%\n* `torch.distributed` support for TPU v2 and v3, including `pjrt://` `init_method` (Experimental)\n* Single-host GPU support. Multi-host support coming soon. (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "Switching to PJRT requires no change (or minimal change for GPUs) to user code (see [pjrt.md](https://github.com/pytorch/xla/blob/master/docs/pjrt.md) for more details). Runtime configuration is as simple as setting the `PJRT_DEVICE` environment variable to the local device type (i.e. `TPU`, `GPU`, `CPU`). Below are examples of using PJRT runtimes on different devices. \n\n```\n# TPU Device\nPJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1\n```\n\n```\n# TPU Pod Device\ngcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command=\"git clone --depth=1 --branch r2.0 https://github.com/pytorch/xla.git\"\n\ngcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command=\"PJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1\"\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "```\n# GPU Device (Experimental)\nPJRT_DEVICE=GPU GPU_NUM_DEVICES=4 python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=128 --num_epochs=1\n```\n\nBelow is a performance comparison between XRT and PJRT by task on TorchBench 2.0 on v4-8 TPU. To learn more about PJRT vs. XRT please review the [documentation](https://github.com/pytorch/xla/blob/r2.0/docs/pjrt.md#tpu).\n\n\n{:width=\"100%\"}\n\n\n\n## Parallelization\n\n\n### GSPMD (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "We are delighted to introduce General and Scalable Parallelization for ML Computation Graphs ([GSPMD](https://arxiv.org/abs/2105.04663)) in PyTorch as a new experimental data & model sharding solution. [GSPMD](https://arxiv.org/abs/2105.04663) provides automatic parallelization for common ML workloads, allowing developers to write PyTorch programs as if on a single large device and without custom sharded computation ops and/or collective communication ops. The XLA compiler transforms the single device program into a partitioned one with proper collectives, based on the user provided sharding hints. The API ([RFC](https://github.com/pytorch/xla/issues/3871)) will be available in the PyTorch/XLA 2.0 release as an experimental feature on a single TPU VM host. \n\n\n#### Next Steps for GSPMD", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "GSPMD is experimental in 2.0 release. To bring it to Stable status, we plan to address a number of feature gaps and known issues in the following releases, including multi-host support, DTensor integration, partial replication sharding, asynchronous data loading, and checkpointing. \n\n\n### FSDP (Beta)\n\nPyTorch/XLA [introduced](https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/) fully sharded data parallel (FSDP) experimental support in version 1.12. This feature is a parallel representation of PyTorch FSDP and there are subtle differences in how XLA and upstream CUDA kernels are set up. `auto_wrap_policy` is a new argument that enables developers to automatically specify conditions for propagating partitioning specifications to neural network submodules. `auto_wrap_policy`s may be simply passed in as an argument when wrapping a model with FSDP. Two `auto_wrap_policy` callables worth noting are: `size_based_auto_wrap_policy`, `transformer_auto_wrap_policy`.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "`size_based_auto_wrap_policy` enables users to wrap submodules with a minimum number of parameters. The example below wraps model submodules having at least 10M parameters.\n\n```\nauto_wrap_policy = partial(size_based_auto_wrap_policy, min_num_params=1e7)\n```\n\n`transformer_auto_wrap_policy` enables users to wrap all submodules that match a specific layer type. The example below wraps model submodules named `torch.nn.Conv2d`. To learn more, review [this ResNet example](https://github.com/pytorch/xla/blob/master/test/test_train_mp_imagenet_fsdp.py#L237-L255) by Ronghang Hu.\n\n```\nauto_wrap_policy = partial(transformer_auto_wrap_policy, transformer_layer_cls={torch.nn.Conv2d})\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
{"page_content": "PyTorch/XLA FSDP is now integrated in HuggingFace trainer class ([PR](https://github.com/huggingface/transformers/pull/21406)) enabling users to train much larger models on PyTorch/XLA ([official Hugging Face documentation](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#pytorchxla-fully-sharded-data-parallel)). A 16B parameters GPT2 model trained on Cloud TPU v4-64 with this FSDP configuration achieved 39% hardware utilization.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "\n \n TPU Accelerator - Num Devices\n | \n v4-64\n | \n
\n \n GPT2 Parameter Count\n | \n 16B\n | \n
\n \n Layers Wrapped with FSDP\n | \n GPT2Block\n | \n
\n \n TFLOPs / Chip\n | \n 275\n | \n
\n \n PFLOPs / Step\n | \n 50\n | ", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "
\n \n Hardware Utilization\n | \n 39%\n | \n
\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Differences Between FSDP & GSPMD\n\nFSDP is a data parallelism technique that reduces device memory footprint by storing model parameters, optimizer states, and gradients all sharded. Note that the actual computation is still local to the device and requires all-gathering the sharded model parameters for both forward and backward passes, hence the name \u201cdata parallel\u201d. FSDP is one of the newest additions to PyTorch/XLA to scale large model training.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "GSPMD on the other hand, is a general parallelization system that enables various types of parallelisms, including both data and model parallelisms. PyTorch/XLA provides a sharding annotation API and XLAShardedTensor abstraction, so a user can annotate any tensor with sharding specs in the PyTorch program. Developers don\u2019t need to manually implement sharded computations or inject collective communications ops to get it right. The XLA compiler does the work so that each computation can run in a distributed", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "manner on multiple devices.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Examples & Preliminary Results\n\nTo learn about PyTorch/XLA parallelism sharding API, visit our [RFC](https://github.com/pytorch/xla/issues/3871) and see the [Sample Code](https://github.com/pytorch/xla/tree/r2.0/test/spmd) references. Below is a simple example to enable data and model parallelism.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "```\nmodel = SimpleLinear().to(xm.xla_device())\n# Sharding annotate the linear layer weights.\nxs.mark_sharding(model.fc1.weight, mesh, partition_spec)\n# Training loop\nmodel.train()\nfor step, (data, target) in enumerate(loader):\n optimizer.zero_grad()\n data = data.to(xm.xla_device())\n target = target.to(xm.xla_device())\n # Sharding annotate input data, we can shard any input\n # dimensions. Sharidng the batch dimension enables \n # data parallelism, sharding the feature dimension enables", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "# spatial partitioning.\n xs.mark_sharding(data, mesh, partition_spec)\n ouput = model(data)\n loss = loss_fn(output, target)\n optimizer.step()\n xm.mark_step()", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "The following graph highlights the memory efficiency benefits of PyTorch/XLA FSDP and SPMD on Cloud TPU v4-8 running ResNet50.\n\n\n{:width=\"100%\"}", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Closing Thoughts\u2026", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "We are excited to bring these features to the PyTorch community, and this is really just the beginning. Areas like dynamic shapes, deeper support for OpenXLA and many others are in development and we plan to put out more blogs to dive into the details. PyTorch/XLA is developed fully open source and we invite you to join the community of developers by filing issues, submitting pull requests, and sending RFCs on [GitHub](github.com/pytorch/xla). You can try PyTorch/XLA on a variety of XLA devices including", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "TPUs and GPUs. [Here](https://colab.sandbox.google.com/github/pytorch/xla/blob/master/contrib/colab/getting-started.ipynb) is how to get started.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "Congratulations again to the PyTorch community on this milestone!\n\nCheers,\n\nThe PyTorch Team at Google", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Compromised PyTorch-nightly dependency chain between December 25th and December 30th, 2022.\"\nauthor: The PyTorch Team\n---\n\nIf you installed PyTorch-nightly on Linux via pip between December 25, 2022 and December 30, 2022, please uninstall it and torchtriton immediately, and use the latest nightly binaries (newer than Dec 30th 2022).\n\n```bash\n$ pip3 uninstall -y torch torchvision torchaudio torchtriton\n$ pip3 cache purge", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch-nightly Linux packages installed via pip during that time installed a dependency, torchtriton, which was compromised on the Python Package Index (PyPI) code repository and ran a malicious binary. This is what is known as a supply chain attack and directly affects dependencies for packages that are hosted on public package indices.\n\n**NOTE:** Users of the PyTorch **stable** packages **are not** affected by this issue.", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "How to check if your Python environment is affected\n\nThe following command searches for the malicious binary in the torchtriton package (`PYTHON_SITE_PACKAGES/triton/runtime/triton`) and prints out whether your current Python environment is affected or not.", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "```bash\npython3 -c \"import pathlib;import importlib.util;s=importlib.util.find_spec('triton'); affected=any(x.name == 'triton' for x in (pathlib.Path(s.submodule_search_locations[0] if s is not None else '/' ) / 'runtime').glob('*'));print('You are {}affected'.format('' if affected else 'not '))\"", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "The malicious binary is executed when the triton package is imported, which requires explicit code to do and is not PyTorch\u2019s default behavior.", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "The Background", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "At around 4:40pm GMT on December 30 (Friday), we learned about a malicious dependency package (`torchtriton`) that was uploaded to the Python Package Index (PyPI) code repository with the same package name as the one we ship on the [PyTorch nightly package index](https://download.pytorch.org/whl/nightly). Since the [PyPI index takes precedence](https://github.com/pypa/pip/issues/8606), this malicious package was being installed instead of the version from our official repository. This design enables", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "somebody to register a package by the same name as one that exists in a third party index, and pip will install their version by default.", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "This malicious package has the same name `torchtriton` but added in code that uploads sensitive data from the machine.", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "What we know\n\ntorchtriton on PyPI contains a malicious triton binary which is installed at `PYTHON_SITE_PACKAGES/triton/runtime/triton`. Its SHA256 hash is listed below.\n\n`SHA256(triton)= 2385b29489cd9e35f92c072780f903ae2e517ed422eae67246ae50a5cc738a0e`\n\nThe binary\u2019s main function does the following:", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "- Get system information\n - nameservers from `/etc/resolv.conf`\n - hostname from `gethostname()`\n - current username from `getlogin()`\n - current working directory name from `getcwd()`\n - environment variables\n- Read the following files\n - `/etc/hosts`\n - `/etc/passwd`\n - The first 1,000 files in `$HOME/*`\n - `$HOME/.gitconfig`\n - `$HOME/.ssh/*`\n- Upload all of this information, including file contents, via encrypted DNS queries to the domain *.h4ck[.]cfd, using the DNS server wheezy[.]io", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "The binary\u2019s file upload functionality is limited to files less than 99,999 bytes in size. It also uploads only the first 1,000 files in $HOME (but all files < 99,999 bytes in the .ssh directory).", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "Steps taken towards mitigation", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "- torchtriton has been removed as a dependency for our nightly packages and replaced with pytorch-triton ([pytorch/pytorch#91539](https://github.com/pytorch/pytorch/pull/91539)) and a dummy package registered on PyPI (so that this issue doesn\u2019t repeat)\n- All nightly packages that depend on torchtriton have been removed from our package indices at https://download.pytorch.org until further notice", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "- We have reached out to the PyPI security team to get proper ownership of the `torchtriton` package on PyPI and to delete the malicious version", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Running PyTorch Models on Jetson Nano'\nauthor: Jeff Tang, Hamid Shojanazeri, Geeta Chauhan\nfeatured-img: 'assets/images/pytorch-logo.jpg'\n---", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Overview\nNVIDIA [Jetson Nano](https://developer.nvidia.com/embedded/jetson-nano-developer-kit), part of the [Jetson family of products](https://developer.nvidia.com/embedded/jetson-modules) or Jetson modules, is a small yet powerful Linux (Ubuntu) based embedded computer with 2/4GB GPU. With it, you can run many PyTorch models efficiently. This document summarizes our experience of running different deep learning models using 3 different mechanisms on Jetson Nano:", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "1. Jetson Inference the higher-level NVIDIA API that has built-in support for running most common computer vision models which can be transfer-learned with PyTorch on the Jetson platform.\n\n 2. TensorRT, an SDK for high-performance inference from NVIDIA that requires the conversion of a PyTorch model to ONNX, and then to the TensorRT engine file that the TensorRT runtime can run.\n\n 3. PyTorch with the direct PyTorch API `torch.nn` for inference.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Setting up Jetson Nano", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "After purchasing a Jetson Nano [here](https://developer.nvidia.com/buy-jetson?product=jetson_nano&location=US), simply follow the clear step-by-step [instructions](https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit) to download and write the Jetson Nano Developer Kit SD Card Image to a microSD card, and complete the setup. After the setup is done and the Nano is booted, you\u2019ll see the standard Linux prompt along with the username and the Nano name used in the setup.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "To check the GPU status on Nano, run the following commands:\n\n```\nsudo pip3 install jetson-stats\nsudo jtop", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "You\u2019ll see information, including:\n\n\n

\n
\n\nYou can also see the installed CUDA version:\n\n```\n$ ls -lt /usr/local\nlrwxrwxrwx 1 root root 22 Aug 2 01:47 cuda -> /etc/alternatives/cuda\nlrwxrwxrwx 1 root root 25 Aug 2 01:47 cuda-10 -> /etc/alternatives/cuda-10\ndrwxr-xr-x 12 root root 4096 Aug 2 01:47 cuda-10.2", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "To use a camera on Jetson Nano, for example, Arducam 8MP IMX219, follow the instructions [here](https://www.arducam.com/docs/camera-for-jetson-nano/mipi-camera-modules-for-jetson-nano/driver-installation/) or run the commands below after [installing a camera module](https://developer.nvidia.com/embedded/learn/jetson-nano-2gb-devkit-user-guide#id-.JetsonNano2GBDeveloperKitUserGuidevbatuu_v1.0-Camera):", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "```\ncd ~\nwget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh\nchmod +x install_full.sh\n./install_full.sh -m arducam", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Another way to do this is to use the original Jetson Nano camera driver:\n\n```\nsudo dpkg -r arducam-nvidia-l4t-kernel\nsudo shutdown -r now\n```\n\nThen, use ls /dev/video0 to confirm the camera is found:\n\n```\n$ ls /dev/video0\n/dev/video0\n```\n\nAnd finally, the following command to see the camera in action:\n\n```\nnvgstcapture-1.0 --orientation=2\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Using Jetson Inference\nNVIDIA [Jetson Inference](https://github.com/dusty-nv/jetson-inference) API offers the easiest way to run image recognition, object detection, semantic segmentation, and pose estimation models on Jetson Nano. Jetson Inference has TensorRT built-in, so it\u2019s very fast. \n\nTo test run Jetson Inference, first clone the repo and download the models:\n\n```\ngit clone --recursive https://github.com/dusty-nv/jetson-inference\ncd jetson-inference", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Then use the pre-built [Docker Container](https://github.com/dusty-nv/jetson-inference/blob/master/docs/jetpack-setup-2.md) that already has PyTorch installed to test run the models:\n\n```\ndocker/run.sh --volume ~/jetson_inference:/jetson_inference", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "To run image recognition, object detection, semantic segmentation, and pose estimation models on test images, use the following:\n\n```\ncd build/aarch64/bin\n./imagenet.py images/jellyfish.jpg /jetson_inference/jellyfish.jpg\n./segnet.py images/dog.jpg /jetson_inference/dog.jpeg\n./detectnet.py images/peds_0.jpg /jetson_inference/peds_0.jpg\n./posenet.py images/humans_0.jpg /jetson_inference/pose_humans_0.jpg", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Four result images from running the four different models will be generated. Exit the docker image to see them:\n\n```\n$ ls -lt ~/jetson_inference/\n-rw-r--r-- 1 root root 68834 Oct 15 21:30 pose_humans_0.jpg\n-rw-r--r-- 1 root root 914058 Oct 15 21:30 peds_0.jpg\n-rw-r--r-- 1 root root 666239 Oct 15 21:30 dog.jpeg\n-rw-r--r-- 1 root root 179760 Oct 15 21:29 jellyfish.jpg", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n

\n
", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n

\n
\n\nYou can also use the docker image to run PyTorch models because the image has PyTorch, torchvision and torchaudio installed:", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "```\n# pip list|grep torch\ntorch (1.9.0)\ntorchaudio (0.9.0a0+33b2469)\ntorchvision (0.10.0a0+300a8a4)", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Although Jetson Inference includes models already converted to the TensorRT engine file format, you can fine-tune the models by following the steps in Transfer Learning with PyTorch (for Jetson Inference) [here](https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-transfer-learning.md).", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Using TensorRT\n[TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/) is an SDK for high-performance inference from NVIDIA. Jetson Nano supports TensorRT via the Jetpack SDK, included in the SD Card image used to set up Jetson Nano. To confirm that TensorRT is already installed in Nano, `run dpkg -l|grep -i tensorrt`:\n\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Theoretically, TensorRT can be used to \u201ctake a trained PyTorch model and optimize it to run more efficiently during inference on an NVIDIA GPU.\u201d Follow the instructions and code in the [notebook](https://github.com/NVIDIA/TensorRT/blob/master/quickstart/IntroNotebooks/4.%20Using%20PyTorch%20through%20ONNX.ipynb) to see how to use PyTorch with TensorRT through ONNX on a torchvision Resnet50 model:\n\n1. How to convert the model from PyTorch to ONNX;\n\n2. How to convert the ONNX model to a TensorRT engine file;", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "3. How to run the engine file with the TensorRT runtime for performance improvement: inference time improved from the original 31.5ms/19.4ms (FP32/FP16 precision) to 6.28ms (TensorRT).", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "You can replace the Resnet50 model in the notebook code with another PyTorch model, go through the conversion process above, and run the finally converted model TensorRT engine file with the TensorRT runtime to see the optimized performance. But be aware that due to the Nano GPU memory size, models larger than 100MB are likely to fail to run, with the following error information:\n\n`Error Code 1: Cuda Runtime (all CUDA-capable devices are busy or unavailable)`", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "You may also see an error when converting a PyTorch model to ONNX model, which may be fixed by replacing: \n\n`torch.onnx.export(resnet50, dummy_input, \"resnet50_pytorch.onnx\", verbose=False)`\n\nwith:\n\n`torch.onnx.export(model, dummy_input, \"deeplabv3_pytorch.onnx\", opset_version=11, verbose=False)`", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Using PyTorch \nFirst, to download and install PyTorch 1.9 on Nano, run the following commands (see [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048) for more information):", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "```\nwget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl -O torch-1.9.0-cp36-cp36m-linux_aarch64.whl\nsudo apt-get install python3-pip libopenblas-base libopenmpi-dev \npip3 install Cython\npip3 install numpy torch-1.9.0-cp36-cp36m-linux_aarch64.whl", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "To download and install torchvision 0.10 on Nano, run the commands below:\n\n```\nhttps://drive.google.com/uc?id=1tU6YlPjrP605j4z8PMnqwCSoP6sSC91Z\npip3 install torchvision-0.10.0a0+300a8a4-cp36-cp36m-linux_aarch64.whl\n```\n\nAfter the steps above, run this to confirm:\n```\n$ pip3 list|grep torch\ntorch (1.9.0)\ntorchvision (0.10.0)", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "You can also use the docker image described in the section *Using Jetson Inference* (which also has PyTorch and torchvision installed), to skip the manual steps above.\n\nThe official [YOLOv5](https://github.com/ultralytics/yolov5) repo is used to run the PyTorch YOLOv5 model on Jetson Nano. After logging in to Jetson Nano, follow the steps below:\n\n* Get the repo and install what\u2019s required:\n\n```\ngit clone https://github.com/ultralytics/yolov5\ncd yolov5\npip install -r requirements.txt", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "* Run `python3 detect.py`, which by default uses the PyTorch yolov5s.pt model. You should see something like:", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "```\ndetect: weights=yolov5s.pt, source=data/images, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False\nYOLOv5 \ud83d\ude80 v5.0-499-g48b00db torch 1.9.0 CUDA:0 (NVIDIA Tegra X1, 3956.1015625MB)", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Fusing layers... \nModel Summary: 224 layers, 7266973 parameters, 0 gradients\nimage 1/5 /home/jeff/repos/yolov5-new/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 1 fire hydrant, Done. (0.142s)\n...", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "**The inference time on Jetson Nano GPU is about 140ms, more than twice as fast as the inference time on iOS or Android (about 330ms).**\n\nIf you get an error `\u201cImportError: The _imagingft C module is not installed.\u201d` then you need to reinstall pillow:\n```\nsudo apt-get install libpng-dev\nsudo apt-get install libfreetype6-dev\npip3 uninstall pillow\npip3 install --no-cache-dir pillow", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "After successfully completing the `python3 detect.py` run, the object detection results of the test images located in `data/images` will be in the `runs/detect/exp` directory. To test the detection with a live webcam instead of local images, use the `--source 0` parameter when running `python3 detect.py`):", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "```\n~/repos/yolov5$ ls -lt runs/detect/exp10\ntotal 1456\n-rw-rw-r-- 1 jeff jeff 254895 Oct 15 16:12 zidane.jpg\n-rw-rw-r-- 1 jeff jeff 202674 Oct 15 16:12 test3.png\n-rw-rw-r-- 1 jeff jeff 217117 Oct 15 16:12 test2.jpg\n-rw-rw-r-- 1 jeff jeff 305826 Oct 15 16:12 test1.png\n-rw-rw-r-- 1 jeff jeff 495760 Oct 15 16:12 bus.jpg", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Using the same test files used in the PyTorch iOS YOLOv5 demo app or Android YOLOv5 demo app, you can compare the results generated with running the YOLOv5 PyTorch model on mobile devices and Jetson Nano:", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n

\n
\nFigure 1. PyTorch YOLOv5 on Jetson Nano.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n

\n
\nFigure 2. PyTorch YOLOv5 on iOS.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n

\n
\nFigure 3. PyTorch YOLOv5 on Android.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Summary\nBased on our experience of running different PyTorch models for potential demo apps on Jetson Nano, we see that even Jetson Nano, a lower-end of the Jetson family of products, provides a powerful GPU and embedded system that can directly run some of the latest PyTorch models, pre-trained or transfer learned, efficiently.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "Building PyTorch demo apps on Jetson Nano can be similar to building PyTorch apps on Linux, but you can also choose to use TensorRT after converting the PyTorch models to the TensorRT engine file format.\n\nBut if you just need to run some common computer vision models on Jetson Nano using NVIDIA\u2019s Jetson Inference which supports image recognition, object detection, semantic segmentation, and pose estimation models, then this is the easiest way.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "References\nTorch-TensorRT, a compiler for PyTorch via TensorRT:\n[https://github.com/NVIDIA/Torch-TensorRT/](https://github.com/NVIDIA/Torch-TensorRT/)\n\nJetson Inference docker image details:\n[https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md](https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md)", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "A guide to using TensorRT on the NVIDIA Jetson Nano:\n[https://docs.donkeycar.com/guide/robot_sbc/tensorrt_jetson_nano/](https://docs.donkeycar.com/guide/robot_sbc/tensorrt_jetson_nano/) \nincluding:\n\n1. Use Jetson as a portable GPU device to run an NN chess engine model: \n[https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018](https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018)", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "2. A MaskEraser app using PyTorch and torchvision, installed directly with pip:\n[https://github.com/INTEC-ATI/MaskEraser#install-pytorch](https://github.com/INTEC-ATI/MaskEraser#install-pytorch)", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.9 Release, including torch.linalg and Mobile Interpreter'\nauthor: Team PyTorch \n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "We are excited to announce the release of PyTorch 1.9. The release is composed of more than 3,400 commits since 1.8, made by 398 contributors. The release notes are available [here](https://github.com/pytorch/pytorch/releases). Highlights include:\n1. Major improvements to support scientific computing, including *torch.linalg*, *torch.special*, and Complex Autograd\n2. Major improvements in on-device binary size with Mobile Interpreter", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "3. Native support for elastic-fault tolerance training through the upstreaming of TorchElastic into PyTorch Core\n4. Major updates to the PyTorch RPC framework to support large scale distributed training with GPU support\n5. New APIs to optimize performance and packaging for model inference deployment \n6. Support for Distributed training, GPU utilization and SM efficiency in the PyTorch Profiler", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Along with 1.9, we are also releasing major updates to the PyTorch libraries, which you can read about in [this blog post](https://pytorch.org/blog/pytorch-1.9-new-library-releases/). \n\nWe\u2019d like to thank the community for their support and work on this latest release. We\u2019d especially like to thank Quansight and Microsoft for their contributions.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Features in PyTorch releases are classified as Stable, Beta, and Prototype. You can learn more about the definitions in [this blog post](https://pytorch.org/blog/pytorch-feature-classification-changes/). \n\n# Frontend APIs", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) *torch.linalg*", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "In 1.9, the *torch.linalg* module is moving to a stable release. Linear algebra is essential to deep learning and scientific computing, and the *torch.linalg* module extends PyTorch\u2019s support for it with implementations of every function from [NumPy\u2019s linear algebra module](https://numpy.org/doc/stable/reference/routines.linalg.html) (now with support for accelerators and autograd) and more, like", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "[*torch.linalg.matrix_norm*](https://pytorch.org/docs/1.9.0/generated/torch.linalg.matrix_norm.html?highlight=matrix_norm#torch.linalg.matrix_norm) and [*torch.linalg.householder_product*](https://pytorch.org/docs/1.9.0/generated/torch.linalg.householder_product.html?highlight=householder_product#torch.linalg.householder_product). This makes the module immediately familiar to users who have worked with NumPy. Refer to [the", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "documentation](https://pytorch.org/docs/1.9.0/linalg.html?highlight=linalg#module-torch.linalg) here.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "We plan to publish another blog post with more details on the *torch.linalg* module next week!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) Complex Autograd \n\nThe Complex Autograd feature, released as a beta in PyTorch 1.8, is now stable. Since the beta release, we have extended support for Complex Autograd for over 98% operators in PyTorch 1.9, improved testing for complex operators by adding more OpInfos, and added greater validation through TorchAudio migration to native complex tensors (refer to [this issue](https://github.com/pytorch/audio/issues/1337)).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "This feature provides users the functionality to calculate complex gradients and optimize real valued loss functions with complex variables. This is a required feature for multiple current and downstream prospective users of complex numbers in PyTorch like TorchAudio, ESPNet, Asteroid, and FastMRI. Refer to [the documentation](https://pytorch.org/docs/1.9.0/notes/autograd.html#autograd-for-complex-numbers) for more details.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) torch.use_deterministic_algorithms() \n\nTo help with debugging and writing reproducible programs, PyTorch 1.9 includes a *torch.use_determinstic_algorithms* option. When this setting is enabled, operations will behave deterministically, if possible, or throw a runtime error if they might behave nondeterministically. Here are a couple examples:\n\n```python\n>>> a = torch.randn(100, 100, 100, device='cuda').to_sparse()\n>>> b = torch.randn(100, 100, 100, device='cuda')", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "# Sparse-dense CUDA bmm is usually nondeterministic\n>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()\nFalse\n\n>>> torch.use_deterministic_algorithms(True)\n\n# Now torch.bmm gives the same result each time, but with reduced performance\n>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()\nTrue\n\n# CUDA kthvalue has no deterministic algorithm, so it throws a runtime error\n>>> torch.zeros(10000, device='cuda').kthvalue(1)\nRuntimeError: kthvalue CUDA does not have a deterministic implementation...", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch 1.9 adds deterministic implementations for a number of indexing operations, too, including *index_add*, *index_copy*, and *index_put with accum=False*. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/generated/torch.use_deterministic_algorithms.html?highlight=use_deterministic#torch.use_deterministic_algorithms) and [reproducibility note](https://pytorch.org/docs/1.9.0/notes/randomness.html?highlight=reproducibility).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) *torch.special*\n\nA *torch.special* module, analogous to [SciPy\u2019s special module](https://docs.scipy.org/doc/scipy/reference/special.html), is now available in beta. This module contains many functions useful for scientific computing and working with distributions such as *iv*, *ive*, *erfcx*, *logerfc*, and *logerfcx*. Refer to [the documentation](https://pytorch.org/docs/master/special.html) for more details.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) nn.Module parameterization \n\n```nn.Module``` parameterization allows users to parametrize any parameter or buffer of an ```nn.Module``` without modifying the ```nn.Module``` itself. It allows you to constrain the space in which your parameters live without the need for special optimization methods.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "This also contains a new implementation of the ```spectral_norm``` parametrization for PyTorch 1.9. More parametrization will be added to this feature (weight_norm, matrix constraints and part of pruning) for the feature to become stable in 1.10. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/generated/torch.nn.utils.parametrizations.spectral_norm.html?highlight=parametrize) and [tutorial](https://pytorch.org/tutorials/intermediate/parametrizations.html).\n\n# PyTorch Mobile", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Mobile Interpreter \n\nWe are releasing Mobile Interpreter, a streamlined version of the PyTorch runtime, in beta. The Interpreter will execute PyTorch programs in edge devices, with reduced binary size footprint.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Mobile Interpreter is one of the top requested features for PyTorch Mobile. This new release will significantly reduce binary size compared with the current on-device runtime. In order for you to get the binary size improvements with our interpreter (which can reduce the binary size up to ~75% for a typical application) follow these instructions. As an example, using Mobile Interpreter, we can reach 2.6 MB compressed with MobileNetV2 in arm64-v7a Android. With this latest release we are making it much", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "simpler to integrate the interpreter by providing pre-built libraries for iOS and Android.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "TorchVision Library", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Starting from 1.9, users can use the TorchVision library on their iOS/Android apps. The Torchvision library contains the C++ TorchVision ops and needs to be linked together with the main PyTorch library for iOS, for Android it can be added as a gradle dependency. This allows using TorchVision prebuilt MaskRCNN operators for object detections and segmentation. To learn more about the library, please refer to our tutorials and [demo apps](https://github.com/pytorch/android-demo-app/tree/master/D2Go).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Demo apps", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "We are releasing a new video app based on [PyTorch Video](https://pytorchvideo.org/) library and an updated speech recognition app based on the latest torchaudio, wave2vec model. Both are available on [iOS](https://github.com/pytorch/ios-demo-app) and [Android](https://github.com/pytorch/android-demo-app). In addition, we have updated the seven Computer Vision and three Natural Language Processing demo apps, including the HuggingFace DistilBERT, and the DeiT vision transformer models, with PyTorch Mobile", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "v1.9. With the addition of these two apps, we now offer a full suite of demo apps covering image, text, audio, and video. To get started check out our [iOS demo apps](https://github.com/pytorch/ios-demo-app) and [Android demo apps](https://github.com/pytorch/android-demo-app).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
\n\n# Distributed Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) TorchElastic is now part of core", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "[TorchElastic](https://github.com/pytorch/pytorch/issues/50621), which was open sourced over a year ago in the [pytorch/elastic](https://github.com/pytorch/elastic) github repository, is a runner and coordinator for PyTorch worker processes. Since then, it has been adopted by various distributed torch use-cases: 1) [deepspeech.pytorch](https://medium.com/pytorch/training-deepspeech-using-torchelastic-ad013539682) 2)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "[pytorch-lightning](https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#torchelastic) 3) [Kubernetes CRD](https://github.com/pytorch/elastic/blob/master/kubernetes/README.md). Now, it is part of PyTorch core.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "As its name suggests, the core function of TorcheElastic is to gracefully handle scaling events. A notable corollary of elasticity is that peer discovery and rank assignment are built into TorchElastic enabling users to run distributed training on preemptible instances without requiring a gang scheduler. As a side note, [etcd](https://etcd.io/) used to be a hard dependency of TorchElastic. With the upstream, this is no longer the case since we have added a \u201cstandalone\u201d rendezvous based on c10d::Store. For", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/distributed.elastic.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Distributed Training Updates\n\nIn addition to TorchElastic, there are a number of beta features available in the distributed package:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "* **(Beta) CUDA support is available in RPC**: Compared to CPU RPC and general-purpose RPC frameworks, CUDA RPC is a much more efficient way for P2P Tensor communication. It is built on top of TensorPipe which can automatically choose a communication channel for each Tensor based on Tensor device type and channel availability on both the caller and the callee. Existing TensorPipe channels cover NVLink, InfiniBand, SHM, CMA, TCP, etc. See [this recipe](https://pytorch.org/tutorials/recipes/cuda_rpc.html) for", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "how CUDA RPC helps to attain 34x speedup compared to CPU RPC.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "* **(Beta) ZeroRedundancyOptimizer**: ZeroRedundancyOptimizer can be used in conjunction with DistributedDataParallel to reduce the size of per-process optimizer states. The idea of ZeroRedundancyOptimizer comes from [DeepSpeed/ZeRO project](https://github.com/microsoft/DeepSpeed) and [Marian](https://github.com/marian-nmt/marian-dev), where the optimizer in each process owns a shard of model parameters and their corresponding optimizer states. When running `step()`, each optimizer only updates its own", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "parameters, and then uses collective communication to synchronize updated parameters across all processes. Refer to [this documentation](https://pytorch.org/docs/master/distributed.optim.html) and this [tutorial](https://pytorch.org/tutorials/recipes/zero_redundancy_optimizer.html) to learn more.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "* **(Beta) Support for profiling distributed collectives**: PyTorch\u2019s profiler tools, *torch.profiler* and *torch.autograd.profiler*, are able to profile distributed collectives and point to point communication primitives including allreduce, alltoall, allgather, send/recv, etc. This is enabled for all backends supported natively by PyTorch: gloo, mpi, and nccl. This can be used to debug performance issues, analyze traces that contain distributed communication, and gain insight into performance of", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "applications that use distributed training. To learn more, refer to [this documentation](https://pytorch.org/docs/1.9.0/distributed.html#profiling-collective-communication).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "# Performance Optimization and Tooling", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) Freezing API \n\nModule Freezing is the process of inlining module parameters and attributes values as constants into the TorchScript internal representation. This allows further optimization and specialization of your program, both for TorchScript optimizations and lowering to other backends. It is used by [optimize_for_mobile API](https://github.com/pytorch/pytorch/blob/master/torch/utils/mobile_optimizer.py), ONNX, and others.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Freezing is recommended for model deployment. It helps TorchScript JIT optimizations optimize away overhead and bookkeeping that is necessary for training, tuning, or debugging PyTorch models. It enables graph fusions that are not semantically valid on non-frozen graphs - such as fusing Conv-BN. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/generated/torch.jit.freeze.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) PyTorch Profiler \n\n\n

\n
\n\nThe new PyTorch Profiler graduates to beta and leverages [Kineto](https://github.com/pytorch/kineto/) for GPU profiling, TensorBoard for visualization and is now the standard across our tutorials and documentation.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch 1.9 extends support for the new *torch.profiler* API to more builds, including Windows and Mac and is recommended in most cases instead of the previous *torch.autograd.profiler* API. The new API supports existing profiler features, integrates with CUPTI library (Linux-only) to trace on-device CUDA kernels and provides support for long-running jobs, e.g.:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "```python\ndef trace_handler(p):\n output = p.key_averages().table(sort_by=\"self_cuda_time_total\", row_limit=10)\n print(output)\n p.export_chrome_trace(\"/tmp/trace_\" + str(p.step_num) + \".json\")", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "with profile(\n activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],\n # schedule argument specifies the iterations on which the profiler is active\n schedule=torch.profiler.schedule(\n wait=1,\n warmup=1,\n active=2),\n # on_trace_ready argument specifies the handler for the traces\n on_trace_ready=trace_handler\n) as p:\n for idx in range(8):\n model(inputs)\n # profiler will trace iterations 2 and 3, and then 6 and 7 (counting from zero)\n p.step()", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "More usage examples can be found on the [profiler recipe page](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html). \n\nThe PyTorch Profiler Tensorboard plugin has new features for:\n* Distributed Training summary view with communications overview for NCCL\n* GPU Utilization and SM Efficiency in Trace view and GPU operators view\n* Memory Profiling view\n* Jump to source when launched from Microsoft VSCode\n* Ability for load traces from cloud object storage systems", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Inference Mode API", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Inference Mode API allows significant speed-up for inference workloads while remaining safe and ensuring no incorrect gradients can ever be computed. It offers the best possible performance when no autograd is required. For more details, refer to [the documentation for inference mode itself](https://pytorch.org/docs/1.9.0/generated/torch.inference_mode.html?highlight=inference%20mode#torch.inference_mode) and [the documentation explaining when to use it and the difference with no_grad", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "mode](https://pytorch.org/docs/1.9.0/notes/autograd.html#locally-disabling-gradient-computation).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) *torch.package*", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "*torch.package* is a new way to package PyTorch models in a self-contained, stable format. A package will include both the model\u2019s data (e.g. parameters, buffers) and its code (model architecture). Packaging a model with its full set of Python dependencies, combined with a description of a conda environment with pinned versions, can be used to easily reproduce training. Representing a model in a self-contained artifact will also allow it to be published and transferred throughout a production ML pipeline", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "while retaining the flexibility of a pure-Python representation. For more details, refer to [the documentation](https://pytorch.org/docs/1.9.0/package.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
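-{"page_content": "A minimal sketch of exporting and re-importing a model with *torch.package*; the archive name, package name, resource name, and the extern pattern below are illustrative assumptions:\n\n```python\nimport torch\nfrom torch.package import PackageExporter, PackageImporter\n\nmodel = torch.nn.Linear(4, 2)  # stand-in for a real model\n\n# Export the model's code and data into a self-contained archive.\n# torch itself is marked extern so it is loaded from the environment at import time.\nwith PackageExporter(\"my_model.pt\") as exporter:\n    exporter.extern(\"torch.**\")\n    exporter.save_pickle(\"model\", \"model.pkl\", model)\n\n# Load it back from the archive, independent of the original source tree.\nimporter = PackageImporter(\"my_model.pt\")\nloaded = importer.load_pickle(\"model\", \"model.pkl\")\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}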
-{"page_content": "(Prototype) prepare_for_inference", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "prepare_for_inference is a new prototype feature that takes in a module and performs graph-level optimizations to improve inference performance, depending on the device. It is meant to be a PyTorch-native option that requires minimal changes to users\u2019 workflows. For more details, see the documentation for the TorchScript version [here](https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L168) or the FX version", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "[here](https://github.com/pytorch/pytorch/blob/master/torch/fx/experimental/optimization.py#L234).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
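-{"page_content": "The prototype API itself may still change; a closely related path that is already available is TorchScript freezing, sketched below with a toy convolutional block (the model is illustrative):\n\n```python\nimport torch\n\nmodel = torch.nn.Sequential(\n    torch.nn.Conv2d(3, 8, 3),\n    torch.nn.BatchNorm2d(8),\n    torch.nn.ReLU(),\n).eval()\n\n# Freezing inlines parameters and attributes into the TorchScript graph,\n# enabling graph-level optimizations such as conv/batch-norm folding.\nfrozen = torch.jit.freeze(torch.jit.script(model))\nout = frozen(torch.randn(1, 3, 32, 32))\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}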
-{"page_content": "(Prototype) Profile-directed typing in TorchScript", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "TorchScript has a hard requirement for source code to have type annotations in order for compilation to be successful. For a long time, it was only possible to add missing or incorrect type annotations through trial and error (i.e., by fixing the type-checking errors generated by *torch.jit.script* one by one), which was inefficient and time consuming. Now, we have enabled profile directed typing for *torch.jit.script* by leveraging existing tools like MonkeyType, which makes the process much easier,", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "faster, and more efficient. For more details, refer to [the documentation](https://pytorch.org/docs/1.9.0/jit.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Thanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Facebook](https://www.facebook.com/pytorch/), [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), or", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "[LinkedIn](https://www.linkedin.com/company/pytorch).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Cheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing PyTorch Developer Day 2020'\nauthor: Team PyTorch\n---\n\nStarting this year, we plan to host two separate events for PyTorch: one for developers and users to discuss core technical development, ideas and roadmaps called **\u201cDeveloper Day\u201d**, and another for the PyTorch ecosystem and industry communities to showcase their work and discover opportunities to collaborate called **\u201cEcosystem Day\u201d** (scheduled for early 2021).", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2020/", "category": "pytorch blogs"}}
-{"page_content": "The **PyTorch Developer Day** (#PTD2) is kicking off on November 12, 2020, 8AM PST with a full day of technical talks on a variety of topics, including updates to the core framework, new tools and libraries to support development across a variety of domains. You'll also see talks covering the latest research around systems and tooling in ML.", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2020/", "category": "pytorch blogs"}}
-{"page_content": "For Developer Day, we have an online networking event limited to a group composed of PyTorch maintainers and contributors, long-time stakeholders, and experts in areas relevant to PyTorch\u2019s future. Conversations from the networking event will strongly shape the future of PyTorch. Hence, invitations are required to attend the networking event.", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2020/", "category": "pytorch blogs"}}
-{"page_content": "All talks will be livestreamed and available to the public.\n* [Livestream event page](https://www.facebook.com/events/802177440559164/)\n* [Apply for an invitation to the networking event](https://pytorchdeveloperday.fbreg.com/apply)\n\nVisit the [event website](https://pytorchdeveloperday.fbreg.com/) to learn more. We look forward to welcoming you to PyTorch Developer Day on November 12th! \n\nThank you,\n\nThe PyTorch team", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2020/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.7 released w/ CUDA 11, New APIs for FFTs, Windows support for Distributed training and more'\nauthor: Team PyTorch\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "Today, we\u2019re announcing the availability of PyTorch 1.7, along with updated domain libraries. The PyTorch 1.7 release includes a number of new APIs including support for NumPy-Compatible FFT operations, profiling tools and major updates to both distributed data parallel (DDP) and remote procedure call (RPC) based distributed training. In addition, several features moved to [stable](https://pytorch.org/docs/stable/index.html#pytorch-documentation) including custom C++ Classes, the memory profiler, extensions", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "via custom tensor-like objects, user async functions in RPC and a number of other features in torch.distributed such as Per-RPC timeout, DDP dynamic bucketing and RRef helper.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "A few of the highlights include:\n* CUDA 11 is now officially supported with binaries available at [PyTorch.org](http://pytorch.org/)\n* Updates and additions to profiling and performance for RPC, TorchScript and Stack traces in the autograd profiler\n* (Beta) Support for NumPy compatible Fast Fourier transforms (FFT) via torch.fft\n* (Prototype) Support for Nvidia A100 generation GPUs and native TF32 format \n* (Prototype) Distributed training on Windows now supported\n* torchvision", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "* (Stable) Transforms now support Tensor inputs, batch computation, GPU, and TorchScript\n * (Stable) Native image I/O for JPEG and PNG formats\n * (Beta) New Video Reader API\n* torchaudio\n * (Stable) Added support for speech rec (wav2letter), text to speech (WaveRNN) and source separation (ConvTasNet)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "To reiterate, starting PyTorch 1.6, features are now classified as stable, beta and prototype. You can see the detailed announcement [here](https://pytorch.org/blog/pytorch-feature-classification-changes/). Note that the prototype features listed in this blog are available as part of this release. \n\nFind the full release notes [here](https://github.com/pytorch/pytorch/releases). \n\n# Front End APIs", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] NumPy Compatible torch.fft module\nFFT-related functionality is commonly used in a variety of scientific fields like signal processing. While PyTorch has historically supported a few FFT-related functions, the 1.7 release adds a new torch.fft module that implements FFT-related functions with the same API as NumPy.\n\nThis new module must be imported to be used in the 1.7 release, since its name conflicts with the historic (and now deprecated) torch.fft function.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "**Example usage:**\n```python\n>>> import torch.fft\n>>> t = torch.arange(4)\n>>> t\ntensor([0, 1, 2, 3])\n\n>>> torch.fft.fft(t)\ntensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])\n\n>>> t = torch.tensor([0.+1.j, 2.+3.j, 4.+5.j, 6.+7.j])\n>>> torch.fft.fft(t)\ntensor([12.+16.j, -8.+0.j, -4.-4.j, 0.-8.j])\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "* [Documentation](https://pytorch.org/docs/stable/fft.html#torch-fft)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] C++ Support for Transformer NN Modules\nSince [PyTorch 1.5](https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/), we\u2019ve continued to maintain parity between the Python and C++ frontend APIs. This update allows developers to use the nn.transformer module abstraction from the C++ Frontend. Moreover, developers no longer need to save a module from Python/JIT and load it into C++, as it can now be used in C++ directly.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "* [Documentation](https://pytorch.org/cppdocs/api/classtorch_1_1nn_1_1_transformer_impl.html#_CPPv4N5torch2nn15TransformerImplE)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] torch.set_deterministic", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "Reproducibility (bit-for-bit determinism) may help identify errors when debugging or testing a program. To facilitate reproducibility, PyTorch 1.7 adds the ```torch.set_deterministic(bool)``` function that can direct PyTorch operators to select deterministic algorithms when available, and to throw a runtime error if an operation may result in nondeterministic behavior. By default, the flag this function controls is false and there is no change in behavior, meaning PyTorch may implement its operations", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "nondeterministically by default.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "More precisely, when this flag is true:\n* Operations known to not have a deterministic implementation throw a runtime error;\n* Operations with deterministic variants use those variants (usually with a performance penalty versus the non-deterministic version); and\n* ```torch.backends.cudnn.deterministic = True``` is set.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "Note that this is necessary, **but not sufficient**, for determinism **within a single run of a PyTorch program**. Other sources of randomness like random number generators, unknown operations, or asynchronous or distributed computation may still cause nondeterministic behavior.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
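-{"page_content": "A minimal sketch of the opt-in (using the PyTorch 1.7 API name):\n\n```python\nimport torch\n\ntorch.set_deterministic(True)  # also sets torch.backends.cudnn.deterministic = True\n\nx = torch.randn(8, 8)\ny = x @ x  # ops with deterministic implementations run as usual\n\n# An op known to lack a deterministic implementation would instead raise a\n# RuntimeError here, rather than silently producing run-to-run differences.\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}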
-{"page_content": "See the documentation for ```torch.set_deterministic(bool)``` for the list of affected operations.\n* [RFC](https://github.com/pytorch/pytorch/issues/15359)\n* [Documentation](https://pytorch.org/docs/stable/generated/torch.set_deterministic.html)\n\n# Performance & Profiling", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Stack traces added to profiler", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "Users can now see not only operator name/inputs in the profiler output table but also where the operator is in the code. The workflow requires very little change to take advantage of this capability. The user uses the [autograd profiler](https://pytorch.org/docs/stable/autograd.html#profiler) as before but with optional new parameters: ```with_stack``` and ```group_by_stack_n```. Caution: regular profiling runs should not use this feature as it adds significant overhead.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
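-{"page_content": "A minimal sketch of the new parameters (the toy matmul stands in for a real model):\n\n```python\nimport torch\nfrom torch.autograd import profiler\n\nx = torch.randn(128, 128)\n\nwith profiler.profile(with_stack=True) as prof:\n    y = torch.mm(x, x)\n\n# group_by_stack_n groups events that share the same top-n stack frames\nprint(prof.key_averages(group_by_stack_n=5).table(sort_by=\"self_cpu_time_total\", row_limit=5))\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}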
-{"page_content": "* [Detail](https://github.com/pytorch/pytorch/pull/43898/)\n* [Documentation](https://pytorch.org/docs/stable/autograd.html)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "# Distributed Training & RPC", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] TorchElastic now bundled into PyTorch docker image\nTorchElastic offers a strict superset of the current ```torch.distributed.launch``` CLI with added features for fault tolerance and elasticity. If the user is not interested in fault tolerance, they can get exact functionality/behavior parity by setting ```max_restarts=0```, with the added convenience of auto-assigned ```RANK``` and ```MASTER_ADDR|PORT``` (versus manually specified in ```torch.distributed.launch```).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "By bundling ```torchelastic``` in the same docker image as PyTorch, users can start experimenting with TorchElastic right-away without having to separately install ```torchelastic```. In addition to convenience, this work is a nice-to-have when adding support for elastic parameters in the existing Kubeflow\u2019s distributed PyTorch operators.\n* [Usage examples and how to get started](https://pytorch.org/elastic/0.2.0/examples.html)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Support for uneven dataset inputs in DDP", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch 1.7 introduces a new context manager to be used in conjunction with models trained using ```torch.nn.parallel.DistributedDataParallel``` to enable training with uneven dataset size across different processes. This feature enables greater flexibility when using DDP and prevents the user from having to manually ensure dataset sizes are the same across different processes. With this context manager, DDP will handle uneven dataset sizes automatically, which can prevent errors or hangs at the end of", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "training.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
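-{"page_content": "A minimal sketch of the context manager inside each DDP process; the process-group setup, ```local_model```, ```rank```, and ```dataloader``` are assumed to already exist:\n\n```python\nimport torch\nfrom torch.nn.parallel import DistributedDataParallel as DDP\n\n# assumes torch.distributed.init_process_group(...) has already run in this process\nmodel = DDP(local_model, device_ids=[rank])\noptimizer = torch.optim.SGD(model.parameters(), lr=0.01)\n\nwith model.join():  # shadows collectives for ranks that run out of batches early\n    for inputs, targets in dataloader:\n        optimizer.zero_grad()\n        loss = torch.nn.functional.mse_loss(model(inputs), targets)\n        loss.backward()\n        optimizer.step()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}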
-{"page_content": "* [RFC](https://github.com/pytorch/pytorch/issues/38174)\n* [Documentation](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.join)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] NCCL Reliability - Async Error/Timeout Handling", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "In the past, NCCL training runs would hang indefinitely due to stuck collectives, leading to a very unpleasant experience for users. This feature will abort stuck collectives and throw an exception/crash the process if a potential hang is detected. When used with something like torchelastic (which can recover the training process from the last checkpoint), users can have much greater reliability for distributed training. This feature is completely opt-in and sits behind an environment variable that needs to", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "be explicitly set in order to enable this functionality (otherwise users will see the same behavior as before).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
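-{"page_content": "A minimal sketch of opting in; the ```NCCL_ASYNC_ERROR_HANDLING``` variable name is taken from the linked ```init_process_group``` documentation (treat it as an assumption here), and it must be set before the NCCL process group is created:\n\n```python\nimport os\nfrom datetime import timedelta\nimport torch.distributed as dist\n\n# opt in before creating the process group; stuck collectives are then aborted\nos.environ[\"NCCL_ASYNC_ERROR_HANDLING\"] = \"1\"\n\n# rank and world_size are assumed to come from the launcher environment\ndist.init_process_group(\n    backend=\"nccl\",\n    timeout=timedelta(minutes=5),\n    rank=int(os.environ[\"RANK\"]),\n    world_size=int(os.environ[\"WORLD_SIZE\"]),\n)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}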
-{"page_content": "* [RFC](https://github.com/pytorch/pytorch/issues/46874)\n* [Documentation](https://pytorch.org/docs/stable/distributed.html?highlight=init_process_group#torch.distributed.init_process_group)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] TorchScript ```rpc_remote``` and ```rpc_sync```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "```torch.distributed.rpc.rpc_async``` has been available in TorchScript in prior releases. For PyTorch 1.7, this functionality is extended to the remaining two core RPC APIs, ```torch.distributed.rpc.rpc_sync``` and ```torch.distributed.rpc.remote```. This completes the major RPC APIs targeted for support in TorchScript; it allows users to use the existing Python RPC APIs within TorchScript (in a script function or script method, which releases the Python Global Interpreter Lock) and could possibly", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "improve application performance in multithreaded environments.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "* [Documentation](https://pytorch.org/docs/stable/rpc.html#rpc)\n* [Usage examples](https://github.com/pytorch/pytorch/blob/58ed60c259834e324e86f3e3118e4fcbbfea8dd1/torch/testing/_internal/distributed/rpc/jit/rpc_test.py#L505-L525)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Distributed optimizer with TorchScript support", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch provides a broad set of optimizers for training algorithms, and these have been used repeatedly as part of the Python API. However, users often want to use multithreaded training instead of multiprocess training, as it provides better resource utilization and efficiency in the context of large scale distributed training (e.g. Distributed Model Parallel) or any RPC-based training application. Users couldn\u2019t do this with the distributed optimizer before because we need to get rid of the Python Global", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "Interpreter Lock (GIL) limitation to achieve this.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "In PyTorch 1.7, we are enabling TorchScript support in the distributed optimizer to remove the GIL and make it possible to run the optimizer in multithreaded applications. The new distributed optimizer has the exact same interface as before, but it automatically converts the optimizers within each worker into TorchScript to make each of them GIL-free. This is done by leveraging a functional optimizer concept and allowing the distributed optimizer to convert the computational portion of the optimizer into TorchScript. This", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "will help use cases like distributed model parallel training and improve performance using multithreading.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "Currently, the only optimizer that supports automatic conversion with TorchScript is ```Adagrad```; all other optimizers will still work as before, without TorchScript support. We are working on expanding the coverage to all PyTorch optimizers and expect more to come in future releases. Enabling TorchScript support is automatic and exactly the same as with the existing Python APIs; here is an example of how to use this:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "```python\nimport torch.distributed.autograd as dist_autograd\nimport torch.distributed.rpc as rpc\nfrom torch import optim\nfrom torch.distributed.optim import DistributedOptimizer\n\nwith dist_autograd.context() as context_id:\n # Forward pass.\n rref1 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 3))\n rref2 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 1))\n loss = rref1.to_here() + rref2.to_here()\n\n # Backward pass.\n dist_autograd.backward(context_id, [loss.sum()])", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "# Optimizer, pass in optim.Adagrad, DistributedOptimizer will\n # automatically convert/compile it to TorchScript (GIL-free)\n dist_optim = DistributedOptimizer(\n optim.Adagrad,\n [rref1, rref2],\n lr=0.05,\n )\n dist_optim.step(context_id)\n ```\n* [RFC](https://github.com/pytorch/pytorch/issues/46883)\n* [Documentation](https://pytorch.org/docs/stable/rpc.html#module-torch.distributed.optim)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Enhancements to RPC-based Profiling\nSupport for using the PyTorch profiler in conjunction with the RPC framework was first introduced in PyTorch 1.6. In PyTorch 1.7, the following enhancements have been made:\n* Implemented better support for profiling TorchScript functions over RPC\n* Achieved parity in terms of profiler features that work with RPC\n* Added support for asynchronous RPC functions on the server-side (functions decorated with ```rpc.functions.async_execution```).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "Users are now able to use familiar profiling tools such as ```with torch.autograd.profiler.profile()``` and ```with torch.autograd.profiler.record_function```; this works transparently with the RPC framework with full feature support, and profiles both asynchronous and TorchScript functions.\n* [Design doc](https://github.com/pytorch/pytorch/issues/39675)\n* [Usage examples](https://pytorch.org/tutorials/recipes/distributed_rpc_profiling.html)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
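-{"page_content": "A minimal sketch of profiling an RPC from the caller side; the worker names and the surrounding ```rpc.init_rpc``` setup are assumed:\n\n```python\nimport torch\nimport torch.distributed.rpc as rpc\nfrom torch.autograd import profiler\n\n# assumes rpc.init_rpc(\"worker0\", rank=0, world_size=2) has already run on this process\nwith profiler.profile() as prof:\n    fut = rpc.rpc_async(\"worker1\", torch.add, args=(torch.ones(2), 1))\n    fut.wait()\n\n# remote execution shows up alongside local ops in the profiler output\nprint(prof.key_averages().table(sort_by=\"cpu_time_total\"))\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}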
-{"page_content": "[Prototype] Windows support for Distributed Training\nPyTorch 1.7 brings prototype support for ```DistributedDataParallel``` and collective communications on the Windows platform. In this release, the support only covers Gloo-based ```ProcessGroup``` and ```FileStore```.\n\nTo use this feature across multiple machines, please provide a file from a shared file system in ```init_process_group```.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "```python\n# initialize the process group\ndist.init_process_group(\n \"gloo\",\n # multi-machine example:\n # init_method = \"file://////{machine}/{share_folder}/file\"\n init_method=\"file:///{your local file path}\",\n rank=rank,\n world_size=world_size\n)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "model = DistributedDataParallel(local_model, device_ids=[rank])\n```\n* [Design doc](https://github.com/pytorch/pytorch/issues/42095)\n* [Documentation](https://pytorch.org/docs/master/distributed.html#backends-that-come-with-pytorch)\n* Acknowledgement ([gunandrose4u](https://github.com/gunandrose4u))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "# Mobile\nPyTorch Mobile supports both [iOS](https://pytorch.org/mobile/ios) and [Android](https://pytorch.org/mobile/android/) with binary packages available in [Cocoapods](https://cocoapods.org/) and [JCenter](https://mvnrepository.com/repos/jcenter) respectively. You can learn more about PyTorch Mobile [here](https://pytorch.org/mobile/home/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] PyTorch Mobile Caching allocator for performance improvements", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "On some mobile platforms, such as Pixel, we observed that memory is returned to the system more aggressively. This results in frequent page faults, as PyTorch, being a functional framework, does not maintain state for the operators, so for most ops the outputs are allocated dynamically on each execution. To ameliorate the resulting performance penalties, PyTorch 1.7 provides a simple caching allocator for CPU. The allocator caches allocations by tensor size and is currently available only via the", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch C++ API. The caching allocator itself is owned by the client, and thus its lifetime is also maintained by client code. Such a client-owned caching allocator can then be used with the scoped guard, ```c10::WithCPUCachingAllocatorGuard```, to enable the use of cached allocations within that scope.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "**Example usage:**", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "```cpp\n#include <c10/mobile/CPUCachingAllocator.h>\n.....\nc10::CPUCachingAllocator caching_allocator;\n  // Owned by client code. Can be a member of some client class so as to tie\n  // the lifetime of the caching allocator to that of the class.\n.....\n{\n  c10::optional<c10::WithCPUCachingAllocatorGuard> caching_allocator_guard;\n  if (FLAGS_use_caching_allocator) {\n    caching_allocator_guard.emplace(&caching_allocator);\n  }\n  ....\n  model.forward(..);\n}\n...\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "**NOTE**: Caching allocator is only available on mobile builds, thus the use of caching allocator outside of mobile builds won\u2019t be effective.\n* [Documentation](https://github.com/pytorch/pytorch/blob/master/c10/mobile/CPUCachingAllocator.h#L13-L43)\n* [Usage examples](https://github.com/pytorch/pytorch/blob/master/binaries/speed_benchmark_torch.cc#L207)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "# torchvision", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] Transforms now support Tensor inputs, batch computation, GPU, and TorchScript\ntorchvision transforms now inherit from ```nn.Module``` and can be TorchScripted and applied on torch Tensor inputs as well as on PIL images. They also support Tensors with batch dimensions and work seamlessly on CPU/GPU devices:\n```python\nimport torch\nimport torchvision.transforms as T\n\n# to fix random seed, use torch.manual_seed\n# instead of random.seed\ntorch.manual_seed(12)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "transforms = torch.nn.Sequential(\n T.RandomCrop(224),\n T.RandomHorizontalFlip(p=0.3),\n T.ConvertImageDtype(torch.float),\n T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n)\nscripted_transforms = torch.jit.script(transforms)\n# Note: we can similarly use T.Compose to define transforms\n# transforms = T.Compose([...]) and \n# scripted_transforms = torch.jit.script(torch.nn.Sequential(*transforms.transforms))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "tensor_image = torch.randint(0, 256, size=(3, 256, 256), dtype=torch.uint8)\n# works directly on Tensors\nout_image1 = transforms(tensor_image)\n# on the GPU\nout_image1_cuda = transforms(tensor_image.cuda())\n# with batches\nbatched_image = torch.randint(0, 256, size=(4, 3, 256, 256), dtype=torch.uint8)\nout_image_batched = transforms(batched_image)\n# and has torchscript support\nout_image2 = scripted_transforms(tensor_image)\n```\nThese improvements enable the following new features:\n* support for GPU acceleration", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "* batched transformations, e.g. as needed for videos\n* transform multi-band torch tensor images (with more than 3-4 channels)\n* torchscript transforms together with your model for deployment\n**Note:** Exceptions for TorchScript support include ```Compose```, ```RandomChoice```, ```RandomOrder```, ```Lambda``` and those applied on PIL images, such as ```ToPILImage```.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] Native image IO for JPEG and PNG formats\ntorchvision 0.8.0 introduces native image reading and writing operations for JPEG and PNG formats. Those operators support TorchScript and return ```CxHxW``` tensors in ```uint8``` format, and can thus be now part of your model for deployment in C++ environments.\n```python\nfrom torchvision.io import read_image\n\n# tensor_image is a CxHxW uint8 Tensor\ntensor_image = read_image('path_to_image.jpeg')", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "# or equivalently\nfrom torchvision.io import read_file, decode_image\n# raw_data is a 1d uint8 Tensor with the raw bytes\nraw_data = read_file('path_to_image.jpeg')\ntensor_image = decode_image(raw_data)\n\n# all operators are torchscriptable and can be\n# serialized together with your model torchscript code\nscripted_read_image = torch.jit.script(read_image)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] RetinaNet detection model\nThis release adds pretrained models for RetinaNet with a ResNet50 backbone from [Focal Loss for Dense Object Detection](https://arxiv.org/abs/1708.02002).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] New Video Reader API\nThis release introduces a new video reading abstraction, which gives more fine-grained control of iteration over videos. It supports image and audio, and implements an iterator interface so that it is interoperable with other Python libraries such as itertools.\n```python\nfrom torchvision.io import VideoReader", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "# stream indicates if reading from audio or video\nreader = VideoReader('path_to_video.mp4', stream='video')\n# can change the stream after construction\n# via reader.set_current_stream\n\n# to read all frames in a video starting at 2 seconds\nfor frame in reader.seek(2):\n # frame is a dict with \"data\" and \"pts\" metadata\n print(frame[\"data\"], frame[\"pts\"])", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "# because reader is an iterator you can combine it with\n# itertools\nfrom itertools import takewhile, islice\n# read 10 frames starting from 2 seconds\nfor frame in islice(reader.seek(2), 10):\n pass\n \n# or to return all frames between 2 and 5 seconds\nfor frame in takewhile(lambda x: x[\"pts\"] < 5, reader):\n pass\n```\n**Notes:**\n* In order to use the Video Reader API beta, you must compile torchvision from source and have ffmpeg installed in your system.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "* The VideoReader API is currently released as beta and its API may change following user feedback.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "# torchaudio\nWith this release, torchaudio is expanding its support for models and [end-to-end applications](https://github.com/pytorch/audio/tree/master/examples), adding a wav2letter training pipeline and end-to-end text-to-speech and source separation pipelines. Please file an issue on [github](https://github.com/pytorch/audio/issues/new?template=questions-help-support.md) to provide feedback on them.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] Speech Recognition\nBuilding on the addition of the wav2letter model for speech recognition in the last release, we\u2019ve now added an [example wav2letter training pipeline](https://github.com/pytorch/audio/tree/master/examples/pipeline_wav2letter) with the LibriSpeech dataset.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] Text-to-speech\nWith the goal of supporting text-to-speech applications, we added a vocoder based on the WaveRNN model, based on the implementation from [this repository](https://github.com/fatchord/WaveRNN). The original implementation was introduced in \"Efficient Neural Audio Synthesis\". We also provide an [example WaveRNN training pipeline](https://github.com/pytorch/audio/tree/master/examples/pipeline_wavernn) that uses the LibriTTS dataset added to torchaudio in this release.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] Source Separation\nWith the addition of the ConvTasNet model, based on the paper \"Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation,\" torchaudio now also supports source separation. An [example ConvTasNet training pipeline](https://github.com/pytorch/audio/tree/master/examples/source_separation) is provided with the wsj-mix dataset.\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Adding a Contributor License Agreement for PyTorch'\nauthor: Team PyTorch\n---\n\nTo ensure the ongoing growth and success of the framework, we're introducing the use of the Apache Contributor License Agreement (CLA) for PyTorch. We care deeply about the broad community of contributors who make PyTorch such a great framework, so we want to take a moment to explain why we are adding a CLA.", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Why Does PyTorch Need a CLA?\n\nCLAs help clarify that users and maintainers have the relevant rights to use and maintain code contributed to an open source project, while allowing contributors to retain ownership rights to their code.", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "| Configuration | Value |\n| --- | --- |\n| TPU Accelerator - Num Devices | v4-64 |\n| GPT2 Parameter Count | 16B |\n| Layers Wrapped with FSDP | GPT2Block |\n| TFLOPs / Chip | 275 |\n| PFLOPs / Step | 50 |\n| Hardware Utilization | 39% |\n\n\n### Differences Between FSDP & GSPMD", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "FSDP is a data parallelism technique that reduces device memory footprint by storing model parameters, optimizer states, and gradients all sharded. Note that the actual computation is still local to the device and requires all-gathering the sharded model parameters for both forward and backward passes, hence the name \u201cdata parallel\u201d. FSDP is one of the newest additions to PyTorch/XLA to scale large model training.\n\nGSPMD on the other hand, is a general parallelization system that enables various types of parallelisms, including both data and model parallelisms. PyTorch/XLA provides a sharding annotation API and XLAShardedTensor abstraction, so a user can annotate any tensor with sharding specs in the PyTorch program. Developers don\u2019t need to manually implement sharded computations or inject collective communications ops to get it right. The XLA compiler does the work so that each computation can run in a distributed manner on multiple devices.\n\n\n### Examples & Preliminary Results", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "To learn about the PyTorch/XLA parallelism sharding API, visit our [RFC](https://github.com/pytorch/xla/issues/3871) and see the [Sample Code](https://github.com/pytorch/xla/tree/r2.0/test/spmd) references. Below is a simple example to enable data and model parallelism.\n\n```\nmodel = SimpleLinear().to(xm.xla_device())\n# Sharding annotate the linear layer weights.\nxs.mark_sharding(model.fc1.weight, mesh, partition_spec)\n# Training loop\nmodel.train()\nfor step, (data, target) in enumerate(loader):\n  optimizer.zero_grad()\n  data = data.to(xm.xla_device())\n  target = target.to(xm.xla_device())\n  # Sharding annotate input data, we can shard any input\n  # dimensions. Sharding the batch dimension enables\n  # data parallelism, sharding the feature dimension enables\n  # spatial partitioning.\n  xs.mark_sharding(data, mesh, partition_spec)\n  output = model(data)\n  loss = loss_fn(output, target)\n  loss.backward()\n  optimizer.step()\n  xm.mark_step()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
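+{"page_content": "For comparison, a minimal sketch of the FSDP path described above, reusing ```SimpleLinear```, ```loader```, and ```loss_fn``` from the snippet above; the ```XlaFullyShardedDataParallel``` import reflects the PyTorch/XLA FSDP module and should be treated as an assumption:\n\n```\nimport torch\nimport torch_xla.core.xla_model as xm\nfrom torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP\n\n# Wrap the model so parameters, gradients, and optimizer states are sharded across devices.\nmodel = FSDP(SimpleLinear().to(xm.xla_device()))\noptimizer = torch.optim.SGD(model.parameters(), lr=0.01)\n\nmodel.train()\nfor step, (data, target) in enumerate(loader):\n  optimizer.zero_grad()\n  output = model(data.to(xm.xla_device()))\n  loss = loss_fn(output, target.to(xm.xla_device()))\n  loss.backward()\n  optimizer.step()\n  xm.mark_step()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}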
+{"page_content": "The following graph highlights the memory efficiency benefits of PyTorch/XLA FSDP and SPMD on Cloud TPU v4-8 running ResNet50.\n\n[Graph: memory efficiency of PyTorch/XLA FSDP and SPMD on Cloud TPU v4-8 running ResNet50]\n\n## Closing Thoughts\u2026\n\nWe are excited to bring these features to the PyTorch community, and this is really just the beginning. Areas like dynamic shapes, deeper support for OpenXLA and many others are in development and we plan to put out more blogs to dive into the details. PyTorch/XLA is developed fully open source and we invite you to join the community of developers by filing issues, submitting pull requests, and sending RFCs on [GitHub](https://github.com/pytorch/xla). You can try PyTorch/XLA on a variety of XLA devices including TPUs and GPUs. [Here](https://colab.sandbox.google.com/github/pytorch/xla/blob/master/contrib/colab/getting-started.ipynb) is how to get started.\n\nCongratulations again to the PyTorch community on this milestone!\n\nCheers,\n\nThe PyTorch Team at Google", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Compromised PyTorch-nightly dependency chain between December 25th and December 30th, 2022.\"\nauthor: The PyTorch Team\n---\n\nIf you installed PyTorch-nightly on Linux via pip between December 25, 2022 and December 30, 2022, please uninstall it and torchtriton immediately, and use the latest nightly binaries (newer than Dec 30th 2022).\n\n```bash\n$ pip3 uninstall -y torch torchvision torchaudio torchtriton\n$ pip3 cache purge\n```\n\nPyTorch-nightly Linux packages installed via pip during that time installed a dependency, torchtriton, which was compromised on the Python Package Index (PyPI) code repository and ran a malicious binary. This is what is known as a supply chain attack and directly affects dependencies for packages that are hosted on public package indices.\n\n**NOTE:** Users of the PyTorch **stable** packages **are not** affected by this issue.\n\n\n## How to check if your Python environment is affected", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
+{"page_content": "The following command searches for the malicious binary in the torchtriton package (`PYTHON_SITE_PACKAGES/triton/runtime/triton`) and prints out whether your current Python environment is affected or not.\n\n```bash\npython3 -c \"import pathlib;import importlib.util;s=importlib.util.find_spec('triton'); affected=any(x.name == 'triton' for x in (pathlib.Path(s.submodule_search_locations[0] if s is not None else '/' ) / 'runtime').glob('*'));print('You are {}affected'.format('' if affected else 'not '))\"\n```\n\nThe malicious binary is executed when the triton package is imported, which requires explicit code to do and is not PyTorch\u2019s default behavior.\n\n## The Background", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
+{"page_content": "## The Background\n\nAt around 4:40pm GMT on December 30 (Friday), we learned about a malicious dependency package (`torchtriton`) that was uploaded to the Python Package Index (PyPI) code repository with the same package name as the one we ship on the [PyTorch nightly package index](https://download.pytorch.org/whl/nightly). Since the [PyPI index takes precedence](https://github.com/pypa/pip/issues/8606), this malicious package was being installed instead of the version from our official repository. This design enables somebody to register a package by the same name as one that exists in a third party index, and pip will install their version by default.\n\nThis malicious package has the same name `torchtriton` but added in code that uploads sensitive data from the machine.\n\n\n## What we know\n\ntorchtriton on PyPI contains a malicious triton binary which is installed at `PYTHON_SITE_PACKAGES/triton/runtime/triton`. Its SHA256 hash is listed below.", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
+{"page_content": "`SHA256(triton)= 2385b29489cd9e35f92c072780f903ae2e517ed422eae67246ae50a5cc738a0e`\n\nThe binary\u2019s main function does the following:\n\n- Get system information\n - nameservers from `/etc/resolv.conf`\n - hostname from `gethostname()`\n - current username from `getlogin()`\n - current working directory name from `getcwd()`\n - environment variables\n- Read the following files\n - `/etc/hosts`\n - `/etc/passwd`\n - The first 1,000 files in `$HOME/*`\n - `$HOME/.gitconfig`\n - `$HOME/.ssh/*`\n- Upload all of this information, including file contents, via encrypted DNS queries to the domain *.h4ck[.]cfd, using the DNS server wheezy[.]io\n\nThe binary\u2019s file upload functionality is limited to files less than 99,999 bytes in size. It also uploads only the first 1,000 files in $HOME (but all files < 99,999 bytes in the .ssh directory).\n\n## Steps taken towards mitigation", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
+{"page_content": "- torchtriton has been removed as a dependency for our nightly packages and replaced with pytorch-triton ([pytorch/pytorch#91539](https://github.com/pytorch/pytorch/pull/91539)) and a dummy package registered on PyPI (so that this issue doesn\u2019t repeat)\n- All nightly packages that depend on torchtriton have been removed from our package indices at https://download.pytorch.org until further notice\n- We have reached out to the PyPI security team to get proper ownership of the `torchtriton` package on PyPI and to delete the malicious version", "metadata": {"source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Running PyTorch Models on Jetson Nano'\nauthor: Jeff Tang, Hamid Shojanazeri, Geeta Chauhan\nfeatured-img: 'assets/images/pytorch-logo.jpg'\n---\n\n### Overview\nNVIDIA [Jetson Nano](https://developer.nvidia.com/embedded/jetson-nano-developer-kit), part of the [Jetson family of products](https://developer.nvidia.com/embedded/jetson-modules) or Jetson modules, is a small yet powerful Linux (Ubuntu) based embedded computer with 2/4GB GPU. With it, you can run many PyTorch models efficiently. This document summarizes our experience of running different deep learning models using 3 different mechanisms on Jetson Nano:\n\n 1. Jetson Inference the higher-level NVIDIA API that has built-in support for running most common computer vision models which can be transfer-learned with PyTorch on the Jetson platform.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "2. TensorRT, an SDK for high-performance inference from NVIDIA that requires the conversion of a PyTorch model to ONNX, and then to the TensorRT engine file that the TensorRT runtime can run.\n\n 3. PyTorch with the direct PyTorch API `torch.nn` for inference.\n\n### Setting up Jetson Nano\nAfter purchasing a Jetson Nano [here](https://developer.nvidia.com/buy-jetson?product=jetson_nano&location=US), simply follow the clear step-by-step [instructions](https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit) to download and write the Jetson Nano Developer Kit SD Card Image to a microSD card, and complete the setup. After the setup is done and the Nano is booted, you\u2019ll see the standard Linux prompt along with the username and the Nano name used in the setup.\n\nTo check the GPU status on Nano, run the following commands:\n\n```\nsudo pip3 install jetson-stats\nsudo jtop\n```\n\nYou\u2019ll see information, including:", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "[Screenshot: jtop output showing GPU and system status]\n\nYou can also see the installed CUDA version:\n\n```\n$ ls -lt /usr/local\nlrwxrwxrwx 1 root root 22 Aug 2 01:47 cuda -> /etc/alternatives/cuda\nlrwxrwxrwx 1 root root 25 Aug 2 01:47 cuda-10 -> /etc/alternatives/cuda-10\ndrwxr-xr-x 12 root root 4096 Aug 2 01:47 cuda-10.2\n```\n\nTo use a camera on Jetson Nano, for example, Arducam 8MP IMX219, follow the instructions [here](https://www.arducam.com/docs/camera-for-jetson-nano/mipi-camera-modules-for-jetson-nano/driver-installation/) or run the commands below after [installing a camera module](https://developer.nvidia.com/embedded/learn/jetson-nano-2gb-devkit-user-guide#id-.JetsonNano2GBDeveloperKitUserGuidevbatuu_v1.0-Camera):\n\n```\ncd ~\nwget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh\nchmod +x install_full.sh\n./install_full.sh -m arducam\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "Another way to do this is to use the original Jetson Nano camera driver:\n\n```\nsudo dpkg -r arducam-nvidia-l4t-kernel\nsudo shutdown -r now\n```\n\nThen, use `ls /dev/video0` to confirm the camera is found:\n\n```\n$ ls /dev/video0\n/dev/video0\n```\n\nAnd finally, run the following command to see the camera in action:\n\n```\nnvgstcapture-1.0 --orientation=2\n```\n\n### Using Jetson Inference\nNVIDIA [Jetson Inference](https://github.com/dusty-nv/jetson-inference) API offers the easiest way to run image recognition, object detection, semantic segmentation, and pose estimation models on Jetson Nano. Jetson Inference has TensorRT built-in, so it\u2019s very fast. \n\nTo test run Jetson Inference, first clone the repo and download the models:\n\n```\ngit clone --recursive https://github.com/dusty-nv/jetson-inference\ncd jetson-inference\n```\n\nThen use the pre-built [Docker Container](https://github.com/dusty-nv/jetson-inference/blob/master/docs/jetpack-setup-2.md) that already has PyTorch installed to test run the models:", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "```\ndocker/run.sh --volume ~/jetson_inference:/jetson_inference\n```\n\nTo run image recognition, object detection, semantic segmentation, and pose estimation models on test images, use the following:\n\n```\ncd build/aarch64/bin\n./imagenet.py images/jellyfish.jpg /jetson_inference/jellyfish.jpg\n./segnet.py images/dog.jpg /jetson_inference/dog.jpeg\n./detectnet.py images/peds_0.jpg /jetson_inference/peds_0.jpg\n./posenet.py images/humans_0.jpg /jetson_inference/pose_humans_0.jpg\n```\n\nFour result images from running the four different models will be generated. Exit the docker image to see them:\n\n```\n$ ls -lt ~/jetson_inference/\n-rw-r--r-- 1 root root 68834 Oct 15 21:30 pose_humans_0.jpg\n-rw-r--r-- 1 root root 914058 Oct 15 21:30 peds_0.jpg\n-rw-r--r-- 1 root root 666239 Oct 15 21:30 dog.jpeg\n-rw-r--r-- 1 root root 179760 Oct 15 21:29 jellyfish.jpg\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "[Result images from the four models: image recognition, segmentation, object detection, and pose estimation]\n\nYou can also use the docker image to run PyTorch models because the image has PyTorch, torchvision and torchaudio installed:\n\n```\n# pip list|grep torch\ntorch (1.9.0)\ntorchaudio (0.9.0a0+33b2469)\ntorchvision (0.10.0a0+300a8a4)\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "Although Jetson Inference includes models already converted to the TensorRT engine file format, you can fine-tune the models by following the steps in Transfer Learning with PyTorch (for Jetson Inference) [here](https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-transfer-learning.md).\n\n### Using TensorRT\n[TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/) is an SDK for high-performance inference from NVIDIA. Jetson Nano supports TensorRT via the Jetpack SDK, included in the SD Card image used to set up Jetson Nano. To confirm that TensorRT is already installed in Nano, run `dpkg -l|grep -i tensorrt`:\n\n[Screenshot: dpkg output listing installed TensorRT packages]", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "Theoretically, TensorRT can be used to \u201ctake a trained PyTorch model and optimize it to run more efficiently during inference on an NVIDIA GPU.\u201d Follow the instructions and code in the [notebook](https://github.com/NVIDIA/TensorRT/blob/master/quickstart/IntroNotebooks/4.%20Using%20PyTorch%20through%20ONNX.ipynb) to see how to use PyTorch with TensorRT through ONNX on a torchvision Resnet50 model:\n\n1. How to convert the model from PyTorch to ONNX;\n\n2. How to convert the ONNX model to a TensorRT engine file; \n\n3. How to run the engine file with the TensorRT runtime for performance improvement: inference time improved from the original 31.5ms/19.4ms (FP32/FP16 precision) to 6.28ms (TensorRT).", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
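+{"page_content": "A minimal sketch of step 1, exporting the torchvision Resnet50 to ONNX; the input size and file name follow the notebook's convention and should be treated as assumptions:\n\n```\nimport torch\nimport torchvision\n\nmodel = torchvision.models.resnet50(pretrained=True).eval()\ndummy_input = torch.randn(1, 3, 224, 224)\n\n# opset_version=11 avoids the export error mentioned below\ntorch.onnx.export(model, dummy_input, \"resnet50_pytorch.onnx\", opset_version=11, verbose=False)\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}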
+{"page_content": "You can replace the Resnet50 model in the notebook code with another PyTorch model, go through the conversion process above, and run the final converted TensorRT engine file with the TensorRT runtime to see the optimized performance. But be aware that due to the Nano GPU memory size, models larger than 100MB are likely to fail to run, with the following error information:\n\n`Error Code 1: Cuda Runtime (all CUDA-capable devices are busy or unavailable)`\n\nYou may also see an error when converting a PyTorch model to an ONNX model, which may be fixed by replacing: \n\n`torch.onnx.export(resnet50, dummy_input, \"resnet50_pytorch.onnx\", verbose=False)`\n\nwith:\n\n`torch.onnx.export(model, dummy_input, \"deeplabv3_pytorch.onnx\", opset_version=11, verbose=False)`\n\n### Using PyTorch \nFirst, to download and install PyTorch 1.9 on Nano, run the following commands (see [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048) for more information):", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "```\nwget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.9.0-cp36-cp36m-linux_aarch64.whl\nsudo apt-get install python3-pip libopenblas-base libopenmpi-dev \npip3 install Cython\npip3 install numpy torch-1.9.0-cp36-cp36m-linux_aarch64.whl\n```\n\nTo download and install torchvision 0.10 on Nano, run the commands below:\n\n```\nhttps://drive.google.com/uc?id=1tU6YlPjrP605j4z8PMnqwCSoP6sSC91Z\npip3 install torchvision-0.10.0a0+300a8a4-cp36-cp36m-linux_aarch64.whl\n```\n\nAfter the steps above, run this to confirm:\n```\n$ pip3 list|grep torch\ntorch (1.9.0)\ntorchvision (0.10.0)\n```\n\nYou can also use the docker image described in the section *Using Jetson Inference* (which also has PyTorch and torchvision installed), to skip the manual steps above.\n\nThe official [YOLOv5](https://github.com/ultralytics/yolov5) repo is used to run the PyTorch YOLOv5 model on Jetson Nano. After logging in to Jetson Nano, follow the steps below:", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "* Get the repo and install what\u2019s required:\n\n```\ngit clone https://github.com/ultralytics/yolov5\ncd yolov5\npip install -r requirements.txt\n```\n\n* Run `python3 detect.py`, which by default uses the PyTorch yolov5s.pt model. You should see something like:\n\n```\ndetect: weights=yolov5s.pt, source=data/images, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False\nYOLOv5 \ud83d\ude80 v5.0-499-g48b00db torch 1.9.0 CUDA:0 (NVIDIA Tegra X1, 3956.1015625MB)\n\nFusing layers... \nModel Summary: 224 layers, 7266973 parameters, 0 gradients\nimage 1/5 /home/jeff/repos/yolov5-new/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 1 fire hydrant, Done. (0.142s)\n...\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "**The inference time on Jetson Nano GPU is about 140ms, more than twice as fast as the inference time on iOS or Android (about 330ms).**\n\nIf you get an error `\u201cImportError: The _imagingft C module is not installed.\u201d` then you need to reinstall pillow:\n```\nsudo apt-get install libpng-dev\nsudo apt-get install libfreetype6-dev\npip3 uninstall pillow\npip3 install --no-cache-dir pillow\n```\n\nAfter successfully completing the `python3 detect.py` run, the object detection results of the test images located in `data/images` will be in the `runs/detect/exp` directory. To test the detection with a live webcam instead of local images, use the `--source 0` parameter when running `python3 detect.py`:\n\n```\n~/repos/yolov5$ ls -lt runs/detect/exp10\ntotal 1456\n-rw-rw-r-- 1 jeff jeff 254895 Oct 15 16:12 zidane.jpg\n-rw-rw-r-- 1 jeff jeff 202674 Oct 15 16:12 test3.png\n-rw-rw-r-- 1 jeff jeff 217117 Oct 15 16:12 test2.jpg\n-rw-rw-r-- 1 jeff jeff 305826 Oct 15 16:12 test1.png\n-rw-rw-r-- 1 jeff jeff 495760 Oct 15 16:12 bus.jpg\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "Using the same test files used in the PyTorch iOS YOLOv5 demo app or Android YOLOv5 demo app, you can compare the results generated with running the YOLOv5 PyTorch model on mobile devices and Jetson Nano:\n\nFigure 1. PyTorch YOLOv5 on Jetson Nano.\n\nFigure 2. PyTorch YOLOv5 on iOS.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "Figure 3. PyTorch YOLOv5 on Android.\n\n### Summary\nBased on our experience of running different PyTorch models for potential demo apps on Jetson Nano, we see that even Jetson Nano, a lower-end member of the Jetson family of products, provides a powerful GPU and embedded system that can directly and efficiently run some of the latest PyTorch models, pre-trained or transfer learned.\n\nBuilding PyTorch demo apps on Jetson Nano can be similar to building PyTorch apps on Linux, but you can also choose to use TensorRT after converting the PyTorch models to the TensorRT engine file format.", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
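+{"page_content": "As a rough sketch of the TensorRT route, the snippet below uses the Torch-TensorRT compiler listed in the references further down; the TorchScript file name and the input size are placeholders rather than part of the original walkthrough:\n\n```python\nimport torch\nimport torch_tensorrt  # Torch-TensorRT, see the references below\n\n# Load a TorchScript export of the model (file name here is hypothetical)\nmodel = torch.jit.load('yolov5s.torchscript.pt').eval().cuda()\n\n# Compile the TorchScript module into a TensorRT-backed module\ntrt_model = torch_tensorrt.compile(\n    model,\n    inputs=[torch_tensorrt.Input((1, 3, 640, 640))],\n    enabled_precisions={torch.half},  # allow FP16 kernels on the Nano GPU\n)\n\nout = trt_model(torch.randn(1, 3, 640, 640).cuda())\n```", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}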
+{"page_content": "But if you just need to run some common computer vision models on Jetson Nano using NVIDIA\u2019s Jetson Inference which supports image recognition, object detection, semantic segmentation, and pose estimation models, then this is the easiest way.\n\n\n### References\nTorch-TensorRT, a compiler for PyTorch via TensorRT:\n[https://github.com/NVIDIA/Torch-TensorRT/](https://github.com/NVIDIA/Torch-TensorRT/)\n\nJetson Inference docker image details:\n[https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md](https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md)\n\nA guide to using TensorRT on the NVIDIA Jetson Nano:\n[https://docs.donkeycar.com/guide/robot_sbc/tensorrt_jetson_nano/](https://docs.donkeycar.com/guide/robot_sbc/tensorrt_jetson_nano/) \nincluding:", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "1. Use Jetson as a portable GPU device to run an NN chess engine model: \n[https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018](https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018)\n\n2. A MaskEraser app using PyTorch and torchvision, installed directly with pip:\n[https://github.com/INTEC-ATI/MaskEraser#install-pytorch](https://github.com/INTEC-ATI/MaskEraser#install-pytorch)", "metadata": {"source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.9 Release, including torch.linalg and Mobile Interpreter'\nauthor: Team PyTorch \n---\n\nWe are excited to announce the release of PyTorch 1.9. The release is composed of more than 3,400 commits since 1.8, made by 398 contributors. The release notes are available [here](https://github.com/pytorch/pytorch/releases). Highlights include:\n1. Major improvements to support scientific computing, including *torch.linalg*, *torch.special*, and Complex Autograd\n2. Major improvements in on-device binary size with Mobile Interpreter\n3. Native support for elastic, fault-tolerant training through the upstreaming of TorchElastic into PyTorch Core\n4. Major updates to the PyTorch RPC framework to support large-scale distributed training with GPU support\n5. New APIs to optimize performance and packaging for model inference deployment\n6. Support for distributed training, GPU utilization and SM efficiency in the PyTorch Profiler", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "Along with 1.9, we are also releasing major updates to the PyTorch libraries, which you can read about in [this blog post](https://pytorch.org/blog/pytorch-1.9-new-library-releases/). \n\nWe\u2019d like to thank the community for their support and work on this latest release. We\u2019d especially like to thank Quansight and Microsoft for their contributions.\n\nFeatures in PyTorch releases are classified as Stable, Beta, and Prototype. You can learn more about the definitions in [this blog post](https://pytorch.org/blog/pytorch-feature-classification-changes/). \n\n# Frontend APIs\n\n### (Stable) *torch.linalg*", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "In 1.9, the *torch.linalg* module is moving to a stable release. Linear algebra is essential to deep learning and scientific computing, and the *torch.linalg* module extends PyTorch\u2019s support for it with implementations of every function from [NumPy\u2019s linear algebra module](https://numpy.org/doc/stable/reference/routines.linalg.html) (now with support for accelerators and autograd) and more, like [*torch.linalg.matrix_norm*](https://pytorch.org/docs/1.9.0/generated/torch.linalg.matrix_norm.html?highlight=matrix_norm#torch.linalg.matrix_norm) and [*torch.linalg.householder_product*](https://pytorch.org/docs/1.9.0/generated/torch.linalg.householder_product.html?highlight=householder_product#torch.linalg.householder_product). This makes the module immediately familiar to users who have worked with NumPy. Refer to [the documentation](https://pytorch.org/docs/1.9.0/linalg.html?highlight=linalg#module-torch.linalg) here.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
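+{"page_content": "As a quick illustration of the NumPy-style API (the shapes below are arbitrary and purely for demonstration):\n\n```python\nimport torch\n\nA = torch.randn(3, 3, dtype=torch.float64)\nb = torch.randn(3, dtype=torch.float64)\n\n# NumPy-style linear algebra, now with autograd and accelerator support\nx = torch.linalg.solve(A, b)      # solve Ax = b\nq, r = torch.linalg.qr(A)         # QR decomposition\nn = torch.linalg.matrix_norm(A)   # Frobenius norm by default\n\nprint(torch.allclose(A @ x, b))   # True, up to numerical error\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}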
+{"page_content": "We plan to publish another blog post with more details on the *torch.linalg* module next week!\n\n### (Stable) Complex Autograd \n\nThe Complex Autograd feature, released as a beta in PyTorch 1.8, is now stable. Since the beta release, we have extended support for Complex Autograd for over 98% operators in PyTorch 1.9, improved testing for complex operators by adding more OpInfos, and added greater validation through TorchAudio migration to native complex tensors (refer to [this issue](https://github.com/pytorch/audio/issues/1337)). \n\nThis feature provides users the functionality to calculate complex gradients and optimize real valued loss functions with complex variables. This is a required feature for multiple current and downstream prospective users of complex numbers in PyTorch like TorchAudio, ESPNet, Asteroid, and FastMRI. Refer to [the documentation](https://pytorch.org/docs/1.9.0/notes/autograd.html#autograd-for-complex-numbers) for more details. \n\n### (Stable) torch.use_deterministic_algorithms()", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "To help with debugging and writing reproducible programs, PyTorch 1.9 includes a *torch.use_deterministic_algorithms* option. When this setting is enabled, operations will behave deterministically, if possible, or throw a runtime error if they might behave nondeterministically. Here are a couple of examples:\n\n```python\n>>> a = torch.randn(100, 100, 100, device='cuda').to_sparse()\n>>> b = torch.randn(100, 100, 100, device='cuda')\n\n# Sparse-dense CUDA bmm is usually nondeterministic\n>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()\nFalse\n\n>>> torch.use_deterministic_algorithms(True)\n\n# Now torch.bmm gives the same result each time, but with reduced performance\n>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()\nTrue\n\n# CUDA kthvalue has no deterministic algorithm, so it throws a runtime error\n>>> torch.zeros(10000, device='cuda').kthvalue(1)\nRuntimeError: kthvalue CUDA does not have a deterministic implementation...\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "PyTorch 1.9 adds deterministic implementations for a number of indexing operations, too, including *index_add*, *index_copy*, and *index_put with accum=False*. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/generated/torch.use_deterministic_algorithms.html?highlight=use_deterministic#torch.use_deterministic_algorithms) and [reproducibility note](https://pytorch.org/docs/1.9.0/notes/randomness.html?highlight=reproducibility).\n\n### (Beta) *torch.special*\n\nA *torch.special* module, analogous to [SciPy\u2019s special module](https://docs.scipy.org/doc/scipy/reference/special.html), is now available in beta. This module contains many functions useful for scientific computing and working with distributions such as *iv*, *ive*, *erfcx*, *logerfc*, and *logerfcx*. Refer to [the documentation](https://pytorch.org/docs/master/special.html) for more details. \n\n### (Beta) nn.Module parameterization", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "```nn.Module``` parameterization allows users to parametrize any parameter or buffer of an ```nn.Module``` without modifying the ```nn.Module``` itself. It allows you to constrain the space in which your parameters live without the need for special optimization methods.\n\nThis also contains a new implementation of the ```spectral_norm``` parametrization for PyTorch 1.9. More parametrization will be added to this feature (weight_norm, matrix constraints and part of pruning) for the feature to become stable in 1.10. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/generated/torch.nn.utils.parametrizations.spectral_norm.html?highlight=parametrize) and [tutorial](https://pytorch.org/tutorials/intermediate/parametrizations.html).\n\n# PyTorch Mobile\n\n### (Beta) Mobile Interpreter \n\nWe are releasing Mobile Interpreter, a streamlined version of the PyTorch runtime, in beta. The Interpreter will execute PyTorch programs in edge devices, with reduced binary size footprint.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "Mobile Interpreter is one of the top requested features for PyTorch Mobile. This new release will significantly reduce binary size compared with the current on-device runtime. In order for you to get the binary size improvements with our interpreter (which can reduce the binary size up to ~75% for a typical application) follow these instructions. As an example, using Mobile Interpreter, we can reach 2.6 MB compressed with MobileNetV2 in arm64-v7a Android. With this latest release we are making it much simpler to integrate the interpreter by providing pre-built libraries for iOS and Android.\n\n### TorchVision Library", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "Starting from 1.9, users can use the TorchVision library in their iOS/Android apps. The TorchVision library contains the C++ TorchVision ops and needs to be linked together with the main PyTorch library for iOS; for Android, it can be added as a Gradle dependency. This allows using the prebuilt TorchVision MaskRCNN operators for object detection and segmentation. To learn more about the library, please refer to our tutorials and [demo apps](https://github.com/pytorch/android-demo-app/tree/master/D2Go).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "### Demo apps\n\nWe are releasing a new video app based on the [PyTorch Video](https://pytorchvideo.org/) library and an updated speech recognition app based on the latest torchaudio wav2vec model. Both are available on [iOS](https://github.com/pytorch/ios-demo-app) and [Android](https://github.com/pytorch/android-demo-app). In addition, we have updated the seven Computer Vision and three Natural Language Processing demo apps, including the HuggingFace DistilBERT and the DeiT vision transformer models, with PyTorch Mobile v1.9. With the addition of these two apps, we now offer a full suite of demo apps covering image, text, audio, and video. To get started, check out our [iOS demo apps](https://github.com/pytorch/ios-demo-app) and [Android demo apps](https://github.com/pytorch/android-demo-app).\n\n# Distributed Training\n\n### (Beta) TorchElastic is now part of core", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "[TorchElastic](https://github.com/pytorch/pytorch/issues/50621), which was open sourced over a year ago in the [pytorch/elastic](https://github.com/pytorch/elastic) github repository, is a runner and coordinator for PyTorch worker processes. Since then, it has been adopted by various distributed torch use-cases: 1) [deepspeech.pytorch](https://medium.com/pytorch/training-deepspeech-using-torchelastic-ad013539682) 2) [pytorch-lightning](https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#torchelastic) 3) [Kubernetes CRD](https://github.com/pytorch/elastic/blob/master/kubernetes/README.md). Now, it is part of PyTorch core.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
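+{"page_content": "With TorchElastic in core, a single-node job can be launched through the built-in elastic runner; a minimal sketch (flag names as in the torch.distributed.run documentation for 1.9; the script name and process count are placeholders) looks like:\n\n```\npython -m torch.distributed.run --standalone --nnodes=1 --nproc_per_node=4 train.py\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}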
+{"page_content": "As its name suggests, the core function of TorchElastic is to gracefully handle scaling events. A notable corollary of elasticity is that peer discovery and rank assignment are built into TorchElastic, enabling users to run distributed training on preemptible instances without requiring a gang scheduler. As a side note, [etcd](https://etcd.io/) used to be a hard dependency of TorchElastic. With the upstreaming, this is no longer the case, since we have added a \u201cstandalone\u201d rendezvous based on c10d::Store. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/distributed.elastic.html).\n\n### (Beta) Distributed Training Updates\n\nIn addition to TorchElastic, there are a number of beta features available in the distributed package:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "* **(Beta) CUDA support is available in RPC**: Compared to CPU RPC and general-purpose RPC frameworks, CUDA RPC is a much more efficient way for P2P Tensor communication. It is built on top of TensorPipe which can automatically choose a communication channel for each Tensor based on Tensor device type and channel availability on both the caller and the callee. Existing TensorPipe channels cover NVLink, InfiniBand, SHM, CMA, TCP, etc. See [this recipe](https://pytorch.org/tutorials/recipes/cuda_rpc.html) for how CUDA RPC helps to attain 34x speedup compared to CPU RPC.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "* **(Beta) ZeroRedundancyOptimizer**: ZeroRedundancyOptimizer can be used in conjunction with DistributedDataParallel to reduce the size of per-process optimizer states. The idea of ZeroRedundancyOptimizer comes from [DeepSpeed/ZeRO project](https://github.com/microsoft/DeepSpeed) and [Marian](https://github.com/marian-nmt/marian-dev), where the optimizer in each process owns a shard of model parameters and their corresponding optimizer states. When running `step()`, each optimizer only updates its own parameters, and then uses collective communication to synchronize updated parameters across all processes. Refer to [this documentation](https://pytorch.org/docs/master/distributed.optim.html) and this [tutorial](https://pytorch.org/tutorials/recipes/zero_redundancy_optimizer.html) to learn more.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "* **(Beta) Support for profiling distributed collectives**: PyTorch\u2019s profiler tools, *torch.profiler* and *torch.autograd.profiler*, are able to profile distributed collectives and point to point communication primitives including allreduce, alltoall, allgather, send/recv, etc. This is enabled for all backends supported natively by PyTorch: gloo, mpi, and nccl. This can be used to debug performance issues, analyze traces that contain distributed communication, and gain insight into performance of applications that use distributed training. To learn more, refer to [this documentation](https://pytorch.org/docs/1.9.0/distributed.html#profiling-collective-communication). \n\n# Performance Optimization and Tooling\n\n### (Stable) Freezing API", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "Module Freezing is the process of inlining module parameters and attribute values as constants into the TorchScript internal representation. This allows further optimization and specialization of your program, both for TorchScript optimizations and lowering to other backends. It is used by the [optimize_for_mobile API](https://github.com/pytorch/pytorch/blob/master/torch/utils/mobile_optimizer.py), ONNX, and others.\n\nFreezing is recommended for model deployment. It helps TorchScript JIT optimizations optimize away overhead and bookkeeping that is necessary for training, tuning, or debugging PyTorch models. It enables graph fusions that are not semantically valid on non-frozen graphs, such as fusing Conv-BN. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/generated/torch.jit.freeze.html).\n\n### (Beta) PyTorch Profiler", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "The new PyTorch Profiler graduates to beta and leverages [Kineto](https://github.com/pytorch/kineto/) for GPU profiling, TensorBoard for visualization and is now the standard across our tutorials and documentation. \n\nPyTorch 1.9 extends support for the new *torch.profiler* API to more builds, including Windows and Mac and is recommended in most cases instead of the previous *torch.autograd.profiler* API. The new API supports existing profiler features, integrates with CUPTI library (Linux-only) to trace on-device CUDA kernels and provides support for long-running jobs, e.g.:\n\n```python\ndef trace_handler(p):\n output = p.key_averages().table(sort_by=\"self_cuda_time_total\", row_limit=10)\n print(output)\n p.export_chrome_trace(\"/tmp/trace_\" + str(p.step_num) + \".json\")", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "with profile(\n    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],\n    # schedule argument specifies the iterations on which the profiler is active\n    schedule=torch.profiler.schedule(\n        wait=1,\n        warmup=1,\n        active=2),\n    # on_trace_ready argument specifies the handler for the traces\n    on_trace_ready=trace_handler\n) as p:\n    for idx in range(8):\n        model(inputs)\n        # profiler will trace iterations 2 and 3, and then 6 and 7 (counting from zero)\n        p.step()\n```\n\nMore usage examples can be found on the [profiler recipe page](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html). \n\nThe PyTorch Profiler TensorBoard plugin has new features for:\n* Distributed Training summary view with communications overview for NCCL\n* GPU Utilization and SM Efficiency in Trace view and GPU operators view\n* Memory Profiling view\n* Jump to source when launched from Microsoft VSCode\n* Ability to load traces from cloud object storage systems", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "### (Beta) Inference Mode API \n\nInference Mode API allows significant speed-up for inference workloads while remaining safe and ensuring no incorrect gradients can ever be computed. It offers the best possible performance when no autograd is required. For more details, refer to [the documentation for inference mode itself](https://pytorch.org/docs/1.9.0/generated/torch.inference_mode.html?highlight=inference%20mode#torch.inference_mode) and [the documentation explaining when to use it and the difference with no_grad mode](https://pytorch.org/docs/1.9.0/notes/autograd.html#locally-disabling-gradient-computation).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
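+{"page_content": "A minimal sketch of the API (the linear module here is just a stand-in for any model):\n\n```python\nimport torch\n\nmodel = torch.nn.Linear(16, 4).eval()\nx = torch.randn(8, 16)\n\n# Tensors created inside inference_mode skip autograd tracking and version\n# counter bookkeeping, which is what makes it faster than no_grad for\n# pure inference workloads.\nwith torch.inference_mode():\n    out = model(x)\n\nprint(out.requires_grad)  # False\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}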
+{"page_content": "### (Beta) *torch.package* \n \n*torch.package* is a new way to package PyTorch models in a self-contained, stable format. A package will include both the model\u2019s data (e.g. parameters, buffers) and its code (model architecture). Packaging a model with its full set of Python dependencies, combined with a description of a conda environment with pinned versions, can be used to easily reproduce training. Representing a model in a self-contained artifact will also allow it to be published and transferred throughout a production ML pipeline while retaining the flexibility of a pure-Python representation. For more details, refer to [the documentation](https://pytorch.org/docs/1.9.0/package.html).\n\n### (Prototype) prepare_for_inference", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "prepare_for_inference is a new prototype feature that takes in a module and performs graph-level optimizations to improve inference performance, depending on the device. It is meant to be a PyTorch-native option that requires minimal changes to user\u2019s workflows. For more details, see [the documentation](https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L168) for the Torchscript version [here](https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L168) or the FX version [here](https://github.com/pytorch/pytorch/blob/master/torch/fx/experimental/optimization.py#L234).\n\n### (Prototype) Profile-directed typing in TorchScript", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "TorchScript has a hard requirement for source code to have type annotations in order for compilation to be successful. For a long time, it was only possible to add missing or incorrect type annotations through trial and error (i.e., by fixing the type-checking errors generated by *torch.jit.script* one by one), which was inefficient and time consuming. Now, we have enabled profile directed typing for *torch.jit.script* by leveraging existing tools like MonkeyType, which makes the process much easier, faster, and more efficient. For more details, refer to [the documentation](https://pytorch.org/docs/1.9.0/jit.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "Thanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Facebook](https://www.facebook.com/pytorch/), [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), or [LinkedIn](https://www.linkedin.com/company/pytorch). \n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing PyTorch Developer Day 2020'\nauthor: Team PyTorch\n---\n\nStarting this year, we plan to host two separate events for PyTorch: one for developers and users to discuss core technical development, ideas and roadmaps called **\u201cDeveloper Day\u201d**, and another for the PyTorch ecosystem and industry communities to showcase their work and discover opportunities to collaborate called **\u201cEcosystem Day\u201d** (scheduled for early 2021).\n\nThe **PyTorch Developer Day** (#PTD2) is kicking off on November 12, 2020, 8AM PST with a full day of technical talks on a variety of topics, including updates to the core framework, new tools and libraries to support development across a variety of domains. You'll also see talks covering the latest research around systems and tooling in ML.", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2020/", "category": "pytorch blogs"}}
+{"page_content": "For Developer Day, we have an online networking event limited to PyTorch maintainers and contributors, long-time stakeholders and experts in areas relevant to PyTorch\u2019s future. Conversations from the networking event will strongly shape the future of PyTorch. Hence, invitations are required to attend the networking event.\n\nAll talks will be livestreamed and available to the public.\n* [Livestream event page](https://www.facebook.com/events/802177440559164/)\n* [Apply for an invitation to the networking event](https://pytorchdeveloperday.fbreg.com/apply)\n\nVisit the [event website](https://pytorchdeveloperday.fbreg.com/) to learn more. We look forward to welcoming you to PyTorch Developer Day on November 12th! \n\nThank you,\n\nThe PyTorch team", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2020/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.7 released w/ CUDA 11, New APIs for FFTs, Windows support for Distributed training and more'\nauthor: Team PyTorch\n---\n\nToday, we\u2019re announcing the availability of PyTorch 1.7, along with updated domain libraries. The PyTorch 1.7 release includes a number of new APIs including support for NumPy-Compatible FFT operations, profiling tools and major updates to both distributed data parallel (DDP) and remote procedure call (RPC) based distributed training. In addition, several features moved to [stable](https://pytorch.org/docs/stable/index.html#pytorch-documentation) including custom C++ Classes, the memory profiler, extensions via custom tensor-like objects, user async functions in RPC and a number of other features in torch.distributed such as Per-RPC timeout, DDP dynamic bucketing and RRef helper.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "A few of the highlights include:\n* CUDA 11 is now officially supported with binaries available at [PyTorch.org](http://pytorch.org/)\n* Updates and additions to profiling and performance for RPC, TorchScript and Stack traces in the autograd profiler\n* (Beta) Support for NumPy compatible Fast Fourier transforms (FFT) via torch.fft\n* (Prototype) Support for Nvidia A100 generation GPUs and native TF32 format \n* (Prototype) Distributed training on Windows now supported\n* torchvision\n * (Stable) Transforms now support Tensor inputs, batch computation, GPU, and TorchScript\n * (Stable) Native image I/O for JPEG and PNG formats\n * (Beta) New Video Reader API\n* torchaudio\n * (Stable) Added support for speech rec (wav2letter), text to speech (WaveRNN) and source separation (ConvTasNet)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "To reiterate, starting PyTorch 1.6, features are now classified as stable, beta and prototype. You can see the detailed announcement [here](https://pytorch.org/blog/pytorch-feature-classification-changes/). Note that the prototype features listed in this blog are available as part of this release. \n\nFind the full release notes [here](https://github.com/pytorch/pytorch/releases). \n\n# Front End APIs\n## [Beta] NumPy Compatible torch.fft module\nFFT-related functionality is commonly used in a variety of scientific fields like signal processing. While PyTorch has historically supported a few FFT-related functions, the 1.7 release adds a new torch.fft module that implements FFT-related functions with the same API as NumPy.\n\nThis new module must be imported to be used in the 1.7 release, since its name conflicts with the historic (and now deprecated) torch.fft function.\n\n**Example usage:**\n```python\n>>> import torch.fft\n>>> t = torch.arange(4)\n>>> t\ntensor([0, 1, 2, 3])", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": ">>> torch.fft.fft(t)\ntensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])\n\n>>> t = torch.tensor([0.+1.j, 2.+3.j, 4.+5.j, 6.+7.j])\n>>> torch.fft.fft(t)\ntensor([12.+16.j, -8.+0.j, -4.-4.j, 0.-8.j])\n```\n\n* [Documentation](https://pytorch.org/docs/stable/fft.html#torch-fft)\n\n## [Beta] C++ Support for Transformer NN Modules\nSince [PyTorch 1.5](https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/), we\u2019ve continued to maintain parity between the python and C++ frontend APIs. This update allows developers to use the nn.transformer module abstraction from the C++ Frontend. Moreover, developers no longer need to save a module from python/JIT and load it into C++, as it can now be used in C++ directly.\n* [Documentation](https://pytorch.org/cppdocs/api/classtorch_1_1nn_1_1_transformer_impl.html#_CPPv4N5torch2nn15TransformerImplE)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "## [Beta] torch.set_deterministic \nReproducibility (bit-for-bit determinism) may help identify errors when debugging or testing a program. To facilitate reproducibility, PyTorch 1.7 adds the ```torch.set_deterministic(bool)``` function that can direct PyTorch operators to select deterministic algorithms when available, and to throw a runtime error if an operation may result in nondeterministic behavior. By default, the flag this function controls is false and there is no change in behavior, meaning PyTorch may implement its operations nondeterministically by default. \n\nMore precisely, when this flag is true:\n* Operations known to not have a deterministic implementation throw a runtime error;\n* Operations with deterministic variants use those variants (usually with a performance penalty versus the non-deterministic version); and\n* ```torch.backends.cudnn.deterministic = True``` is set.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "Note that this is necessary, **but not sufficient**, for determinism **within a single run of a PyTorch program**. Other sources of randomness like random number generators, unknown operations, or asynchronous or distributed computation may still cause nondeterministic behavior.\n\nSee the documentation for ```torch.set_deterministic(bool)``` for the list of affected operations.\n* [RFC](https://github.com/pytorch/pytorch/issues/15359)\n* [Documentation](https://pytorch.org/docs/stable/generated/torch.set_deterministic.html)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
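+{"page_content": "A minimal sketch of toggling the flag (no particular model or operation assumed):\n\n```python\nimport torch\n\n# Opt in: operators will prefer deterministic algorithms and raise a\n# RuntimeError if only a nondeterministic implementation exists.\ntorch.set_deterministic(True)\n\n# ... run the code you want to reproduce bit-for-bit here ...\n\n# Opt back out when determinism (and its performance cost) is not needed.\ntorch.set_deterministic(False)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}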
+{"page_content": "# Performance & Profiling\n## [Beta] Stack traces added to profiler\nUsers can now see not only operator name/inputs in the profiler output table but also where the operator is in the code. The workflow requires very little change to take advantage of this capability. The user uses the [autograd profiler](https://pytorch.org/docs/stable/autograd.html#profiler) as before but with optional new parameters: ```with_stack``` and ```group_by_stack_n```. Caution: regular profiling runs should not use this feature as it adds significant overhead.\n* [Detail](https://github.com/pytorch/pytorch/pull/43898/)\n* [Documentation](https://pytorch.org/docs/stable/autograd.html)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
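+{"page_content": "As an illustrative sketch (the ResNet model is just a stand-in; any module and input will do, and torchvision is assumed to be installed):\n\n```python\nimport torch\nimport torchvision.models as models\n\nmodel = models.resnet18()\nx = torch.randn(1, 3, 224, 224)\n\n# with_stack records where in the source each operator was called from\nwith torch.autograd.profiler.profile(with_stack=True) as prof:\n    model(x)\n\n# group by the top 5 stack frames to show source locations in the table\nprint(prof.key_averages(group_by_stack_n=5).table(sort_by='self_cpu_time_total', row_limit=10))\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}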
+{"page_content": "# Distributed Training & RPC \n## [Stable] TorchElastic now bundled into PyTorch docker image\nTorchElastic offers a strict superset of the current ```torch.distributed.launch``` CLI with added features for fault tolerance and elasticity. If the user is not interested in fault tolerance, they can get exact functionality/behavior parity by setting ```max_restarts=0```, with the added convenience of auto-assigned ```RANK``` and ```MASTER_ADDR|PORT``` (versus manually specified in ```torch.distributed.launch```).\n\nBy bundling ```torchelastic``` in the same docker image as PyTorch, users can start experimenting with TorchElastic right away without having to separately install ```torchelastic```. In addition to convenience, this work is a nice-to-have when adding support for elastic parameters in the existing Kubeflow distributed PyTorch operators.\n* [Usage examples and how to get started](https://pytorch.org/elastic/0.2.0/examples.html)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "## [Beta] Support for uneven dataset inputs in DDP\nPyTorch 1.7 introduces a new context manager to be used in conjunction with models trained using ```torch.nn.parallel.DistributedDataParallel``` to enable training with uneven dataset sizes across different processes. This feature enables greater flexibility when using DDP and prevents the user from having to manually ensure dataset sizes are the same across different processes. With this context manager, DDP will handle uneven dataset sizes automatically, which can prevent errors or hangs at the end of training; a short sketch follows the links below.\n* [RFC](https://github.com/pytorch/pytorch/issues/38174)\n* [Documentation](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel.join)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
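+{"page_content": "A minimal sketch of the `join()` context manager linked above; `local_model`, `rank`, `uneven_dataloader`, and `optimizer` are placeholders for whatever the training script already defines:\n\n```python\nimport torch\n\n# assumes torch.distributed has already been initialized in each worker process\nmodel = torch.nn.parallel.DistributedDataParallel(local_model, device_ids=[rank])\n\n# join() lets ranks that run out of data early shadow the collective calls of\n# the ranks still training, so no process hangs at the end of the epoch\nwith model.join():\n    for batch in uneven_dataloader:  # per-rank dataloaders may have different lengths\n        loss = model(batch).sum()\n        loss.backward()\n        optimizer.step()\n        optimizer.zero_grad()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}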
+{"page_content": "## [Beta] NCCL Reliability - Async Error/Timeout Handling\nIn the past, NCCL training runs would hang indefinitely due to stuck collectives, leading to a very unpleasant experience for users. This feature will abort stuck collectives and throw an exception/crash the process if a potential hang is detected. When used with something like torchelastic (which can recover the training process from the last checkpoint), users can have much greater reliability for distributed training. This feature is completely opt-in and sits behind an environment variable that needs to be explicitly set in order to enable this functionality (otherwise users will see the same behavior as before).\n* [RFC](https://github.com/pytorch/pytorch/issues/46874)\n* [Documentation](https://pytorch.org/docs/stable/distributed.html?highlight=init_process_group#torch.distributed.init_process_group)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "## [Beta] TorchScript ```rpc_remote``` and ```rpc_sync```\n```torch.distributed.rpc.rpc_async``` has been available in TorchScript in prior releases. For PyTorch 1.7, this functionality is extended to the remaining two core RPC APIs, ```torch.distributed.rpc.rpc_sync``` and ```torch.distributed.rpc.remote```. This completes the major RPC APIs targeted for support in TorchScript. It allows users to use the existing Python RPC APIs within TorchScript (in a script function or script method, which releases the Python Global Interpreter Lock) and could possibly improve application performance in multithreaded environments.\n* [Documentation](https://pytorch.org/docs/stable/rpc.html#rpc)\n* [Usage examples](https://github.com/pytorch/pytorch/blob/58ed60c259834e324e86f3e3118e4fcbbfea8dd1/torch/testing/_internal/distributed/rpc/jit/rpc_test.py#L505-L525)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "## [Beta] Distributed optimizer with TorchScript support\nPyTorch provides a broad set of optimizers for training algorithms, and these have been used repeatedly as part of the Python API. However, users often want to use multithreaded training instead of multiprocess training, as it provides better resource utilization and efficiency in the context of large scale distributed training (e.g. distributed model parallel) or any RPC-based training application. Users couldn\u2019t do this with the distributed optimizer before, because the Python Global Interpreter Lock (GIL) limitation had to be removed to achieve this.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "In PyTorch 1.7, we are enabling the TorchScript support in distributed optimizer to remove the GIL, and make it possible to run optimizer in multithreaded applications. The new distributed optimizer has the exact same interface as before but it automatically converts optimizers within each worker into TorchScript to make each GIL free. This is done by leveraging a functional optimizer concept and allowing the distributed optimizer to convert the computational portion of the optimizer into TorchScript. This will help use cases like distributed model parallel training and improve performance using multithreading.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "Currently, the only optimizer that supports automatic conversion with TorchScript is ```Adagrad```; all other optimizers will still work as before, without TorchScript support. We are working on expanding the coverage to all PyTorch optimizers and expect more to come in future releases. Enabling TorchScript support is automatic and works exactly the same as the existing Python APIs; here is an example of how to use it:\n\n```python\nimport torch\nimport torch.distributed.autograd as dist_autograd\nimport torch.distributed.rpc as rpc\nfrom torch import optim\nfrom torch.distributed.optim import DistributedOptimizer\n\nwith dist_autograd.context() as context_id:\n # Forward pass.\n rref1 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 3))\n rref2 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 1))\n loss = rref1.to_here() + rref2.to_here()\n\n # Backward pass.\n dist_autograd.backward(context_id, [loss.sum()])", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "# Optimizer, pass in optim.Adagrad, DistributedOptimizer will\n # automatically convert/compile it to TorchScript (GIL-free)\n dist_optim = DistributedOptimizer(\n optim.Adagrad,\n [rref1, rref2],\n lr=0.05,\n )\n dist_optim.step(context_id)\n ```\n* [RFC](https://github.com/pytorch/pytorch/issues/46883)\n* [Documentation](https://pytorch.org/docs/stable/rpc.html#module-torch.distributed.optim)\n\n## [Beta] Enhancements to RPC-based Profiling\nSupport for using the PyTorch profiler in conjunction with the RPC framework was first introduced in PyTorch 1.6. In PyTorch 1.7, the following enhancements have been made:\n* Implemented better support for profiling TorchScript functions over RPC\n* Achieved parity in terms of profiler features that work with RPC\n* Added support for asynchronous RPC functions on the server-side (functions decorated with ```rpc.functions.async_execution)```.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "Users are now able to use familiar profiling tools such as ```torch.autograd.profiler.profile()``` and ```torch.autograd.profiler.record_function```; these work transparently with the RPC framework with full feature support and can profile asynchronous functions as well as TorchScript functions.\n* [Design doc](https://github.com/pytorch/pytorch/issues/39675)\n* [Usage examples](https://pytorch.org/tutorials/recipes/distributed_rpc_profiling.html)\n\n## [Prototype] Windows support for Distributed Training\nPyTorch 1.7 brings prototype support for ```DistributedDataParallel``` and collective communications on the Windows platform. In this release, the support only covers Gloo-based ```ProcessGroup``` and ```FileStore```.\n\nTo use this feature across multiple machines, please provide a file from a shared file system in ```init_process_group```.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "```python\n# initialize the process group\ndist.init_process_group(\n \"gloo\",\n # multi-machine example:\n # init_method = \"file://////{machine}/{share_folder}/file\"\n init_method=\"file:///{your local file path}\",\n rank=rank,\n world_size=world_size\n)\n\nmodel = DistributedDataParallel(local_model, device_ids=[rank])\n```\n* [Design doc](https://github.com/pytorch/pytorch/issues/42095)\n* [Documentation](https://pytorch.org/docs/master/distributed.html#backends-that-come-with-pytorch)\n* Acknowledgement ([gunandrose4u](https://github.com/gunandrose4u))\n\n# Mobile\nPyTorch Mobile supports both [iOS](https://pytorch.org/mobile/ios) and [Android](https://pytorch.org/mobile/android/) with binary packages available in [Cocoapods](https://cocoapods.org/) and [JCenter](https://mvnrepository.com/repos/jcenter) respectively. You can learn more about PyTorch Mobile [here](https://pytorch.org/mobile/home/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "## [Beta] PyTorch Mobile Caching allocator for performance improvements\nOn some mobile platforms, such as Pixel, we observed that memory is returned to the system more aggressively. This results in frequent page faults, because PyTorch, being a functional framework, does not maintain state for operators: for most ops, outputs are allocated dynamically on each execution. To ameliorate the resulting performance penalties, PyTorch 1.7 provides a simple caching allocator for CPU. The allocator caches allocations by tensor size and is currently available only via the PyTorch C++ API. The caching allocator itself is owned by the client, and thus its lifetime is also maintained by client code. Such a client-owned caching allocator can then be used with the scoped guard ```c10::WithCPUCachingAllocatorGuard``` to enable the use of cached allocations within that scope.\n**Example usage:**", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "```cpp\n#include <c10/mobile/CPUCachingAllocator.h>\n.....\nc10::CPUCachingAllocator caching_allocator;\n// Owned by client code. Can be a member of some client class so as to tie\n// the lifetime of the caching allocator to that of the class.\n.....\n{\n  c10::optional<c10::WithCPUCachingAllocatorGuard> caching_allocator_guard;\n  if (FLAGS_use_caching_allocator) {\n    caching_allocator_guard.emplace(&caching_allocator);\n  }\n  ....\n  model.forward(..);\n}\n...\n```\n**NOTE**: The caching allocator is only available on mobile builds, thus using the caching allocator outside of mobile builds won\u2019t be effective.\n* [Documentation](https://github.com/pytorch/pytorch/blob/master/c10/mobile/CPUCachingAllocator.h#L13-L43)\n* [Usage examples](https://github.com/pytorch/pytorch/blob/master/binaries/speed_benchmark_torch.cc#L207)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "# torchvision\n## [Stable] Transforms now support Tensor inputs, batch computation, GPU, and TorchScript\ntorchvision transforms are now inherited from ```nn.Module``` and can be torchscripted and applied on torch Tensor inputs as well as on PIL images. They also support Tensors with batch dimensions and work seamlessly on CPU/GPU devices:\n```python\nimport torch\nimport torchvision.transforms as T\n\n# to fix random seed, use torch.manual_seed\n# instead of random.seed\ntorch.manual_seed(12)\n\ntransforms = torch.nn.Sequential(\n T.RandomCrop(224),\n T.RandomHorizontalFlip(p=0.3),\n T.ConvertImageDtype(torch.float),\n T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n)\nscripted_transforms = torch.jit.script(transforms)\n# Note: we can similarly use T.Compose to define transforms\n# transforms = T.Compose([...]) and \n# scripted_transforms = torch.jit.script(torch.nn.Sequential(*transforms.transforms))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "tensor_image = torch.randint(0, 256, size=(3, 256, 256), dtype=torch.uint8)\n# works directly on Tensors\nout_image1 = transforms(tensor_image)\n# on the GPU\nout_image1_cuda = transforms(tensor_image.cuda())\n# with batches\nbatched_image = torch.randint(0, 256, size=(4, 3, 256, 256), dtype=torch.uint8)\nout_image_batched = transforms(batched_image)\n# and has torchscript support\nout_image2 = scripted_transforms(tensor_image)\n```\nThese improvements enable the following new features:\n* support for GPU acceleration\n* batched transformations e.g. as needed for videos\n* transform multi-band torch tensor images (with more than 3-4 channels)\n* torchscript transforms together with your model for deployment\n**Note:** Exceptions for TorchScript support includes ```Compose```, ```RandomChoice```, ```RandomOrder```, ```Lambda``` and those applied on PIL images, such as ```ToPILImage```.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "## [Stable] Native image IO for JPEG and PNG formats\ntorchvision 0.8.0 introduces native image reading and writing operations for JPEG and PNG formats. Those operators support TorchScript and return ```CxHxW``` tensors in ```uint8``` format, and can thus be now part of your model for deployment in C++ environments.\n```python\nfrom torchvision.io import read_image\n\n# tensor_image is a CxHxW uint8 Tensor\ntensor_image = read_image('path_to_image.jpeg')\n\n# or equivalently\nfrom torchvision.io import read_file, decode_image\n# raw_data is a 1d uint8 Tensor with the raw bytes\nraw_data = read_file('path_to_image.jpeg')\ntensor_image = decode_image(raw_data)\n\n# all operators are torchscriptable and can be\n# serialized together with your model torchscript code\nscripted_read_image = torch.jit.script(read_image)\n```\n## [Stable] RetinaNet detection model\nThis release adds pretrained models for RetinaNet with a ResNet50 backbone from [Focal Loss for Dense Object Detection](https://arxiv.org/abs/1708.02002).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
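+{"page_content": "A short sketch of loading the new pretrained detector (the input is a random tensor purely for illustration, and the weights download on first use):\n\n```python\nimport torch\nimport torchvision\n\n# Pretrained RetinaNet with a ResNet50-FPN backbone, new in torchvision 0.8\nmodel = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True).eval()\n\nwith torch.no_grad():\n    # the model takes a list of CxHxW float images with values in [0, 1]\n    predictions = model([torch.rand(3, 480, 640)])\n\nprint(predictions[0]['boxes'].shape, predictions[0]['scores'].shape)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}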
+{"page_content": "## [Beta] New Video Reader API\nThis release introduces a new video reading abstraction, which gives more fine-grained control of iteration over videos. It supports image and audio, and implements an iterator interface so that it is interoperable with other Python libraries such as itertools.\n```python\nfrom torchvision.io import VideoReader\n\n# stream indicates if reading from audio or video\nreader = VideoReader('path_to_video.mp4', stream='video')\n# can change the stream after construction\n# via reader.set_current_stream\n\n# to read all frames in a video starting at 2 seconds\nfor frame in reader.seek(2):\n # frame is a dict with \"data\" and \"pts\" metadata\n print(frame[\"data\"], frame[\"pts\"])", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "# because reader is an iterator you can combine it with\n# itertools\nfrom itertools import takewhile, islice\n# read 10 frames starting from 2 seconds\nfor frame in islice(reader.seek(2), 10):\n pass\n \n# or to return all frames between 2 and 5 seconds\nfor frame in takewhile(lambda x: x[\"pts\"] < 5, reader):\n pass\n```\n**Notes:**\n* In order to use the Video Reader API beta, you must compile torchvision from source and have ffmpeg installed in your system.\n* The VideoReader API is currently released as beta and its API may change following user feedback.\n\n# torchaudio\nWith this release, torchaudio is expanding its support for models and [end-to-end applications](https://github.com/pytorch/audio/tree/master/examples), adding a wav2letter training pipeline and end-to-end text-to-speech and source separation pipelines. Please file an issue on [github](https://github.com/pytorch/audio/issues/new?template=questions-help-support.md) to provide feedback on them.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "## [Stable] Speech Recognition\nBuilding on the addition of the wav2letter model for speech recognition in the last release, we\u2019ve now added an [example wav2letter training pipeline](https://github.com/pytorch/audio/tree/master/examples/pipeline_wav2letter) with the LibriSpeech dataset.\n\n## [Stable] Text-to-speech\nWith the goal of supporting text-to-speech applications, we added a vocoder based on the WaveRNN model, based on the implementation from [this repository](https://github.com/fatchord/WaveRNN). The original implementation was introduced in \"Efficient Neural Audio Synthesis\". We also provide an [example WaveRNN training pipeline](https://github.com/pytorch/audio/tree/master/examples/pipeline_wavernn) that uses the LibriTTS dataset added to torchaudio in this release.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "## [Stable] Source Separation\nWith the addition of the ConvTasNet model, based on the paper \"Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation,\" torchaudio now also supports source separation. An [example ConvTasNet training pipeline](https://github.com/pytorch/audio/tree/master/examples/source_separation) is provided with the wsj-mix dataset.\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Adding a Contributor License Agreement for PyTorch'\nauthor: Team PyTorch\n---\n\nTo ensure the ongoing growth and success of the framework, we're introducing the use of the Apache Contributor License Agreement (CLA) for PyTorch. We care deeply about the broad community of contributors who make PyTorch such a great framework, so we want to take a moment to explain why we are adding a CLA.\n\n#### Why Does PyTorch Need a CLA?\n\nCLAs help clarify that users and maintainers have the relevant rights to use and maintain code contributed to an open source project, while allowing contributors to retain ownership rights to their code.", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
{"page_content": "PyTorch has grown from a small group of enthusiasts to a now global community with over 1,600 contributors from dozens of countries, each bringing their own diverse perspectives, values and approaches to collaboration. Looking forward, clarity about how this collaboration is happening is an important milestone for the framework as we continue to build a stronger, safer and more scalable community around PyTorch.", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The text of the Apache CLA can be found [here](https://www.apache.org/licenses/contributor-agreements.html), together with an accompanying [FAQ](https://www.apache.org/licenses/cla-faq.html). The language in the PyTorch CLA is identical to the Apache template. Although CLAs have been the subject of significant discussion in the open source community, we are seeing that using a CLA, and particularly the Apache CLA, is now standard practice when projects and communities reach a certain scale. Popular projects", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "that have adopted some type of CLA include: Visual Studio Code, Flutter, TensorFlow, kubernetes, Ubuntu, Django, Python, Go, Android and many others.", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "What is Not Changing\n\nPyTorch\u2019s BSD license is **not** changing. There is no impact to PyTorch users. CLAs will only be required for new contributions to the project. For past contributions, no action is necessary. Everything else stays the same, whether it\u2019s IP ownership, workflows, contributor roles or anything else that you\u2019ve come to expect from PyTorch.", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "How the New CLA will Work\n\nMoving forward, all contributors to projects under the PyTorch GitHub organization will need to sign a CLA to merge their contributions.", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "If you've contributed to other Facebook Open Source projects, you may have already signed the CLA, and no action is required. If you have not signed the CLA, a GitHub check will prompt you to sign it before your pull requests can be merged. You can reach the CLA from this [link](https://code.facebook.com/cla).", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "If you're contributing as an individual, meaning the code is not something you worked on as part of your job, you should sign the individual contributor agreement. This agreement associates your GitHub username with future contributions and only needs to be signed once.", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "If you're contributing as part of your employment, you may need to sign the [corporate contributor agreement](https://code.facebook.com/cla/corporate). Check with your legal team on filling this out. Also you will include a list of github ids from your company.\n\nAs always, we continue to be humbled and grateful for all your support, and we look forward to scaling PyTorch together to even greater heights in the years to come.\n\nThank you!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Feature Extraction in TorchVision using Torch FX'\nauthor: Alexander Soare and Francisco Massa\nfeatured-img: 'assets/images/fx-image2.png'\n---\n\n\n\n# Introduction", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "[FX](https://pytorch.org/docs/stable/fx.html) based feature extraction is a new [TorchVision utility](https://pytorch.org/vision/stable/feature_extraction.html) that lets us access intermediate transformations of an input during the forward pass of a PyTorch Module. It does so by symbolically tracing the forward method to produce a graph where each node represents a single operation. Nodes are named in a human-readable manner such that one may easily specify which nodes they want to access.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Did that all sound a little complicated? Not to worry as there\u2019s a little in this article for everyone. Whether you\u2019re a beginner or an advanced deep-vision practitioner, chances are you will want to know about FX feature extraction. If you still want more background on feature extraction in general, read on. If you\u2019re already comfortable with that and want to know how to do it in PyTorch, skim ahead to Existing Methods in PyTorch: Pros and Cons. And if you already know about the challenges of doing feature", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "extraction in PyTorch, feel free to skim forward to FX to The Rescue.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "A Recap On Feature Extraction\n\nWe\u2019re all used to the idea of having a deep neural network (DNN) that takes inputs and produces outputs, and we don\u2019t necessarily think of what happens in between. Let\u2019s just consider a ResNet-50 classification model as an example:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Figure 1: ResNet-50 takes an image of a bird and transforms that into the abstract concept \"bird\". Source: Bird image from ImageNet.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "We know though, that there are many sequential \u201clayers\u201d within the ResNet-50 architecture that transform the input step-by-step. In Figure 2 below, we peek under the hood to show the layers within ResNet-50, and we also show the intermediate transformations of the input as it passes through those layers.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "\n\t
\n\t
\n\t\tFigure 2: ResNet-50 transforms the input image in multiple steps. Conceptually, we may access the intermediate transformation of the image after each one of these steps. Source: Bird image from ImageNet.\n
", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Existing Methods In PyTorch: Pros and Cons\n\nThere were already a few ways of doing feature extraction in PyTorch prior to FX based feature extraction being introduced.\n\nTo illustrate these, let\u2019s consider a simple convolutional neural network that does the following\n\n* Applies several \u201cblocks\u201d each with several convolution layers within.\n* After several blocks, it uses a global average pool and flatten operation.\n* Finally it uses a single output classification layer.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "```python\nimport torch\nfrom torch import nn\n\n\nclass ConvBlock(nn.Module):\n \"\"\"\n Applies `num_layers` 3x3 convolutions each followed by ReLU then downsamples\n via 2x2 max pool.\n \"\"\"", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "def __init__(self, num_layers, in_channels, out_channels):\n super().__init__()\n self.convs = nn.ModuleList(\n [nn.Sequential(\n nn.Conv2d(in_channels if i==0 else out_channels, out_channels, 3, padding=1),\n nn.ReLU()\n )\n for i in range(num_layers)]\n )\n self.downsample = nn.MaxPool2d(kernel_size=2, stride=2)\n \n def forward(self, x):\n for conv in self.convs:\n x = conv(x)\n x = self.downsample(x)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "return x", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "class CNN(nn.Module):\n \"\"\"\n Applies several ConvBlocks each doubling the number of channels, and\n halving the feature map size, before taking a global average and classifying.\n \"\"\"", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "def __init__(self, in_channels, num_blocks, num_classes):\n super().__init__()\n first_channels = 64\n self.blocks = nn.ModuleList(\n [ConvBlock(\n 2 if i==0 else 3,\n in_channels=(in_channels if i == 0 else first_channels*(2**(i-1))),\n out_channels=first_channels*(2**i))\n for i in range(num_blocks)]\n )\n self.global_pool = nn.AdaptiveAvgPool2d((1, 1))", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "self.cls = nn.Linear(first_channels*(2**(num_blocks-1)), num_classes)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "def forward(self, x):\n for block in self.blocks:\n x = block(x)\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x\n\n\nmodel = CNN(3, 4, 10)\nout = model(torch.zeros(1, 3, 32, 32)) # This will be the final logits over classes", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Let\u2019s say we want to get the final feature map before global average pooling. We could do the following:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Modify the forward method\n\n```python\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n self.final_feature_map = x\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Or return it directly:\n\n```python\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n final_feature_map = x\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x, final_feature_map\n```\nThat looks pretty easy. But there are some downsides here which all stem from the same underlying issue: that is, modifying the source code is not ideal:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "* It\u2019s not always easy to access and change given the practical considerations of a project.\n* If we want flexibility (switching feature extraction on or off, or having variations on it), we need to further adapt the source code to support that.\n* It\u2019s not always just a question of inserting a single line of code. Think about how you would go about getting the feature map from one of the intermediate blocks with the way I\u2019ve written this module.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "* Overall, we\u2019d rather avoid the overhead of maintaining source code for a model, when we actually don\u2019t need to change anything about how it works.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "One can see how this downside can start to get a lot more thorny when dealing with larger, more complicated models, and trying to get at features from within nested submodules.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Write a new module using the parameters from the original one\n\nFollowing on the example from above, say we want to get a feature map from each block. We could write a new module like so:\n\n```python\nclass CNNFeatures(nn.Module):\n def __init__(self, backbone):\n super().__init__()\n self.blocks = backbone.blocks\n\n def forward(self, x):\n feature_maps = []\n for block in self.blocks:\n x = block(x)\n feature_maps.append(x)\n return feature_maps", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "backbone = CNN(3, 4, 10)\nmodel = CNNFeatures(backbone)\nout = model(torch.zeros(1, 3, 32, 32)) # This is now a list of Tensors, each representing a feature map", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "In fact, this is much like the method that TorchVision used internally to make many of its detection models. \n\nAlthough this approach solves some of the issues with modifying the source code directly, there are still some major downsides:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "* It\u2019s only really straight-forward to access the outputs of top-level submodules. Dealing with nested submodules rapidly becomes complicated.\n* We have to be careful not to miss any important operations in between the input and the output. We introduce potential for errors in transcribing the exact functionality of the original module to the new module.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Overall, this method and the last both have the complication of tying in feature extraction with the model\u2019s source code itself. Indeed, if we examine the source code for TorchVision models we might suspect that some of the design choices were influenced by the desire to use them in this way for downstream tasks.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Use hooks\n\nHooks move us away from the paradigm of writing source code, towards one of specifying outputs. Considering our toy CNN example above, and the goal of getting feature maps for each layer, we could use hooks like this:\n\n\n```python\nmodel = CNN(3, 4, 10)\nfeature_maps = [] # This will be a list of Tensors, each representing a feature map\n\ndef hook_feat_map(mod, inp, out):\n\tfeature_maps.append(out)\n\nfor block in model.blocks:\n\tblock.register_forward_hook(hook_feat_map)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "out = model(torch.zeros(1, 3, 32, 32)) # This will be the final logits over classes", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Now we have full flexibility in terms of accessing nested submodules, and we free ourselves of the responsibilities of fiddling with the source code. But this approach comes with its own downsides:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "* We can only apply hooks to modules. If we have functional operations (reshape, view, functional non-linearities, etc) for which we want the outputs, hooks won\u2019t work directly on them.\n* We have not modified anything about the source code, so the whole forward pass is executed, regardless of the hooks. If we only need to access early features without any need for the final output, this could result in a lot of useless computation.\n* Hooks are not TorchScript friendly.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "The text of the Apache CLA can be found [here](https://www.apache.org/licenses/contributor-agreements.html), together with an accompanying [FAQ](https://www.apache.org/licenses/cla-faq.html). The language in the PyTorch CLA is identical to the Apache template. Although CLAs have been the subject of significant discussion in the open source community, we are seeing that using a CLA, and particularly the Apache CLA, is now standard practice when projects and communities reach a certain scale. Popular projects that have adopted some type of CLA include: Visual Studio Code, Flutter, TensorFlow, kubernetes, Ubuntu, Django, Python, Go, Android and many others.\n\n#### What is Not Changing", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "PyTorch\u2019s BSD license is **not** changing. There is no impact to PyTorch users. CLAs will only be required for new contributions to the project. For past contributions, no action is necessary. Everything else stays the same, whether it\u2019s IP ownership, workflows, contributor roles or anything else that you\u2019ve come to expect from PyTorch. \n\n#### How the New CLA will Work\n\nMoving forward, all contributors to projects under the PyTorch GitHub organization will need to sign a CLA to merge their contributions. \n\n\n

\n
\n\nIf you've contributed to other Facebook Open Source projects, you may have already signed the CLA, and no action is required. If you have not signed the CLA, a GitHub check will prompt you to sign it before your pull requests can be merged. You can reach the CLA from this [link](https://code.facebook.com/cla).\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "If you're contributing as an individual, meaning the code is not something you worked on as part of your job, you should sign the individual contributor agreement. This agreement associates your GitHub username with future contributions and only needs to be signed once.\n\nIf you're contributing as part of your employment, you may need to sign the [corporate contributor agreement](https://code.facebook.com/cla/corporate). Check with your legal team on filling this out. Also you will include a list of github ids from your company.\n\nAs always, we continue to be humbled and grateful for all your support, and we look forward to scaling PyTorch together to even greater heights in the years to come.\n\nThank you!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Feature Extraction in TorchVision using Torch FX'\nauthor: Alexander Soare and Francisco Massa\nfeatured-img: 'assets/images/fx-image2.png'\n---\n\n\n\n# Introduction\n\n[FX](https://pytorch.org/docs/stable/fx.html) based feature extraction is a new [TorchVision utility](https://pytorch.org/vision/stable/feature_extraction.html) that lets us access intermediate transformations of an input during the forward pass of a PyTorch Module. It does so by symbolically tracing the forward method to produce a graph where each node represents a single operation. Nodes are named in a human-readable manner such that one may easily specify which nodes they want to access.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "Did that all sound a little complicated? Not to worry as there\u2019s a little in this article for everyone. Whether you\u2019re a beginner or an advanced deep-vision practitioner, chances are you will want to know about FX feature extraction. If you still want more background on feature extraction in general, read on. If you\u2019re already comfortable with that and want to know how to do it in PyTorch, skim ahead to Existing Methods in PyTorch: Pros and Cons. And if you already know about the challenges of doing feature extraction in PyTorch, feel free to skim forward to FX to The Rescue.\n\n\n## A Recap On Feature Extraction\n\nWe\u2019re all used to the idea of having a deep neural network (DNN) that takes inputs and produces outputs, and we don\u2019t necessarily think of what happens in between. Let\u2019s just consider a ResNet-50 classification model as an example:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "\n\t
\n\t
\n\t\tFigure 1: ResNet-50 takes an image of a bird and transforms that into the abstract concept \"bird\". Source: Bird image from ImageNet.\n
\n\nWe know though, that there are many sequential \u201clayers\u201d within the ResNet-50 architecture that transform the input step-by-step. In Figure 2 below, we peek under the hood to show the layers within ResNet-50, and we also show the intermediate transformations of the input as it passes through those layers.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "\n\t
\n\t
\n\t\tFigure 2: ResNet-50 transforms the input image in multiple steps. Conceptually, we may access the intermediate transformation of the image after each one of these steps. Source: Bird image from ImageNet.\n
\n\n\n## Existing Methods In PyTorch: Pros and Cons\n\nThere were already a few ways of doing feature extraction in PyTorch prior to FX based feature extraction being introduced.\n\nTo illustrate these, let\u2019s consider a simple convolutional neural network that does the following\n\n* Applies several \u201cblocks\u201d each with several convolution layers within.\n* After several blocks, it uses a global average pool and flatten operation.\n* Finally it uses a single output classification layer.\n\n```python\nimport torch\nfrom torch import nn", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "class ConvBlock(nn.Module):\n \"\"\"\n Applies `num_layers` 3x3 convolutions each followed by ReLU then downsamples\n via 2x2 max pool.\n \"\"\"\n\n def __init__(self, num_layers, in_channels, out_channels):\n super().__init__()\n self.convs = nn.ModuleList(\n [nn.Sequential(\n nn.Conv2d(in_channels if i==0 else out_channels, out_channels, 3, padding=1),\n nn.ReLU()\n )\n for i in range(num_layers)]\n )\n self.downsample = nn.MaxPool2d(kernel_size=2, stride=2)\n \n def forward(self, x):\n for conv in self.convs:\n x = conv(x)\n x = self.downsample(x)\n return x\n \n\nclass CNN(nn.Module):\n \"\"\"\n Applies several ConvBlocks each doubling the number of channels, and\n halving the feature map size, before taking a global average and classifying.\n \"\"\"", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "def __init__(self, in_channels, num_blocks, num_classes):\n super().__init__()\n first_channels = 64\n self.blocks = nn.ModuleList(\n [ConvBlock(\n 2 if i==0 else 3,\n in_channels=(in_channels if i == 0 else first_channels*(2**(i-1))),\n out_channels=first_channels*(2**i))\n for i in range(num_blocks)]\n )\n self.global_pool = nn.AdaptiveAvgPool2d((1, 1))\n self.cls = nn.Linear(first_channels*(2**(num_blocks-1)), num_classes)\n\n def forward(self, x):\n for block in self.blocks:\n x = block(x)\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x\n\n\nmodel = CNN(3, 4, 10)\nout = model(torch.zeros(1, 3, 32, 32)) # This will be the final logits over classes\n\n```\n\nLet\u2019s say we want to get the final feature map before global average pooling. We could do the following:\n\n### Modify the forward method", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "```python\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n self.final_feature_map = x\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x\n```\n\nOr return it directly:\n\n```python\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n final_feature_map = x\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x, final_feature_map\n```\nThat looks pretty easy. But there are some downsides here which all stem from the same underlying issue: that is, modifying the source code is not ideal:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "* It\u2019s not always easy to access and change given the practical considerations of a project.\n* If we want flexibility (switching feature extraction on or off, or having variations on it), we need to further adapt the source code to support that.\n* It\u2019s not always just a question of inserting a single line of code. Think about how you would go about getting the feature map from one of the intermediate blocks with the way I\u2019ve written this module.\n* Overall, we\u2019d rather avoid the overhead of maintaining source code for a model, when we actually don\u2019t need to change anything about how it works.\n\nOne can see how this downside can start to get a lot more thorny when dealing with larger, more complicated models, and trying to get at features from within nested submodules.\n\n### Write a new module using the parameters from the original one\n\nFollowing on the example from above, say we want to get a feature map from each block. We could write a new module like so:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "```python\nclass CNNFeatures(nn.Module):\n def __init__(self, backbone):\n super().__init__()\n self.blocks = backbone.blocks\n\n def forward(self, x):\n feature_maps = []\n for block in self.blocks:\n x = block(x)\n feature_maps.append(x)\n return feature_maps\n\n\nbackbone = CNN(3, 4, 10)\nmodel = CNNFeatures(backbone)\nout = model(torch.zeros(1, 3, 32, 32)) # This is now a list of Tensors, each representing a feature map\n```\n\nIn fact, this is much like the method that TorchVision used internally to make many of its detection models. \n\nAlthough this approach solves some of the issues with modifying the source code directly, there are still some major downsides:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "* It\u2019s only really straight-forward to access the outputs of top-level submodules. Dealing with nested submodules rapidly becomes complicated.\n* We have to be careful not to miss any important operations in between the input and the output. We introduce potential for errors in transcribing the exact functionality of the original module to the new module.\n\nOverall, this method and the last both have the complication of tying in feature extraction with the model\u2019s source code itself. Indeed, if we examine the source code for TorchVision models we might suspect that some of the design choices were influenced by the desire to use them in this way for downstream tasks.\n\n### Use hooks\n\nHooks move us away from the paradigm of writing source code, towards one of specifying outputs. Considering our toy CNN example above, and the goal of getting feature maps for each layer, we could use hooks like this:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "```python\nmodel = CNN(3, 4, 10)\nfeature_maps = [] # This will be a list of Tensors, each representing a feature map\n\ndef hook_feat_map(mod, inp, out):\n\tfeature_maps.append(out)\n\nfor block in model.blocks:\n\tblock.register_forward_hook(hook_feat_map)\n\nout = model(torch.zeros(1, 3, 32, 32)) # This will be the final logits over classes\n```\n\nNow we have full flexibility in terms of accessing nested submodules, and we free ourselves of the responsibilities of fiddling with the source code. But this approach comes with its own downsides:\n\n* We can only apply hooks to modules. If we have functional operations (reshape, view, functional non-linearities, etc) for which we want the outputs, hooks won\u2019t work directly on them.\n* We have not modified anything about the source code, so the whole forward pass is executed, regardless of the hooks. If we only need to access early features without any need for the final output, this could result in a lot of useless computation.\n* Hooks are not TorchScript friendly.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
{"page_content": "Here\u2019s a summary of the different methods and their pros/cons:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "| | Can use source code as is without any modifications or rewriting | Full flexibility in accessing features | Drops unnecessary computational steps | TorchScript friendly |", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "| Modify forward method | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES |", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "| New module that reuses submodules / parameters of original module | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES |", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "| Hooks | YES | Mostly YES. Only outputs of submodules | NO | NO |", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Table 1: The pros (or cons) of some of the existing methods for feature extraction with PyTorch\n\nIn the next section of this article, let\u2019s see how we can get YES across the board.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "FX to The Rescue\n\nThe natural question for some new-starters in Python and coding at this point might be: *\u201cCan\u2019t we just point to a line of code and tell Python or PyTorch that we want the result of that line?\u201d* For those who have spent more time coding, the reason this can\u2019t be done is clear: multiple operations can happen in one line of code, whether they are explicitly written there, or they are implicit as sub-operations. Just take this simple module as an example:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "```python\nclass MyModule(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.param = torch.nn.Parameter(torch.rand(3, 4))\n self.submodule = MySubModule()\n\n def forward(self, x):\n return self.submodule(x + self.param).clamp(min=0.0, max=1.0)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "The forward method has a single line of code which we can unravel as:\n\n1. Add `self.param` to `x`\n2. Pass x through self.submodule. Here we would need to consider the steps happening in that submodule. I\u2019m just going to use dummy operation names for illustration:\n\tI. submodule.op_1\n\tII. submodule.op_2\n3. Apply the clamp operation\n\nSo even if we point at this one line, the question then is: \u201cFor which step do we want to extract the output?\u201d.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "[FX](https://pytorch.org/docs/stable/fx.html) is a core PyTorch toolkit that (oversimplifying) does the unravelling I just mentioned. It does something called \u201csymbolic tracing\u201d, which means the Python code is interpreted and stepped through, operation-by-operation, using some dummy proxy for a real input. Introducing some nomenclature, each step as described above is considered a **\u201cnode\u201d**, and consecutive nodes are connected to one another to form a **\u201cgraph\u201d** (not unlike the common mathematical notion", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "of a graph). Here are the \u201csteps\u201d above translated to this concept of a graph.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "\n\t
\n\t
\n\t\tFigure 3: Graphical representation of the result of symbolically tracing our example of a simple forward method.\n
", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Note that we call this a graph, and not just a set of steps, because it\u2019s possible for the graph to branch off and recombine. Think of the skip connection in a residual block. This would look something like:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "\n\t
\n\t
\n\t\tFigure 4: Graphical representation of a residual skip connection. The middle node is like the main branch of a residual block, and the final node represents the sum of the input and output of the main branch.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "
", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Now, TorchVision\u2019s **[get_graph_node_names](https://pytorch.org/vision/stable/feature_extraction.html#torchvision.models.feature_extraction.get_graph_node_names)** function applies FX as described above, and in the process of doing so, tags each node with a human readable name. Let\u2019s try this with our toy CNN model from the previous section:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "```python\nmodel = CNN(3, 4, 10)\nfrom torchvision.models.feature_extraction import get_graph_node_names\nnodes, _ = get_graph_node_names(model)\nprint(nodes)\n```\nwhich will result in:\n```python", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "['x', 'blocks.0.convs.0.0', 'blocks.0.convs.0.1', 'blocks.0.convs.1.0', 'blocks.0.convs.1.1', 'blocks.0.downsample', 'blocks.1.convs.0.0', 'blocks.1.convs.0.1', 'blocks.1.convs.1.0', 'blocks.1.convs.1.1', 'blocks.1.convs.2.0', 'blocks.1.convs.2.1', 'blocks.1.downsample', 'blocks.2.convs.0.0', 'blocks.2.convs.0.1', 'blocks.2.convs.1.0', 'blocks.2.convs.1.1', 'blocks.2.convs.2.0', 'blocks.2.convs.2.1', 'blocks.2.downsample', 'blocks.3.convs.0.0', 'blocks.3.convs.0.1', 'blocks.3.convs.1.0',", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "'blocks.3.convs.1.1', 'blocks.3.convs.2.0', 'blocks.3.convs.2.1', 'blocks.3.downsample', 'global_pool', 'flatten', 'cls']", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "We can read these node names as hierarchically organised \u201caddresses\u201d for the operations of interest. For example 'blocks.1.downsample' refers to the MaxPool2d layer in the second `ConvBlock`.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "[`create_feature_extractor`](https://pytorch.org/vision/stable/feature_extraction.html#torchvision.models.feature_extraction.create_feature_extractor), which is where all the magic happens, goes a few steps further than **`get_graph_node_names`**. It takes desired node names as one of the input arguments, and then uses more FX core functionality to:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "1. Assign the desired nodes as outputs.\n2. Prune unnecessary downstream nodes and their associated parameters.\n3. Translate the resulting graph back into Python code.\n4. Return another PyTorch Module to the user. This has the python code from step 3 as the forward method.\n\nAs a demonstration, here\u2019s how we would apply `create_feature_extractor` to get the 4 feature maps from our toy CNN model", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfrom torchvision.models.feature_extraction import create_feature_extractor\n# Confused about the node specification here?\n# We are allowed to provide truncated node names, and `create_feature_extractor`\n# will choose the last node with that prefix.\nfeature_extractor = create_feature_extractor(\n\tmodel, return_nodes=['blocks.0', 'blocks.1', 'blocks.2', 'blocks.3'])\n# `out` will be a dict of Tensors, each representing a feature map\nout = feature_extractor(torch.zeros(1, 3, 32, 32))", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "It\u2019s as simple as that. When it comes down to it, FX feature extraction is just a way of making it possible to do what some of us would have naively hoped for when we first started programming: *\u201cjust give me the output of this code (*points finger at screen)\u201d*.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "| | Can use source code as is without any modifications or rewriting | Full flexibility in accessing features | Drops unnecessary computational steps | TorchScript friendly |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| Modify forward method | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES | \n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| New module that reuses submodules / parameters of original module | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| Hooks | YES | Mostly YES. Only outputs of submodules | NO | NO |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "Table 1: The pros (or cons) of some of the existing methods for feature extraction with PyTorch\n\nIn the next section of this article, let\u2019s see how we can get YES across the board.\n\n\n## FX to The Rescue\n\nThe natural question for some new-starters in Python and coding at this point might be: *\u201cCan\u2019t we just point to a line of code and tell Python or PyTorch that we want the result of that line?\u201d* For those who have spent more time coding, the reason this can\u2019t be done is clear: multiple operations can happen in one line of code, whether they are explicitly written there, or they are implicit as sub-operations. Just take this simple module as an example:\n\n```python\nclass MyModule(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.param = torch.nn.Parameter(torch.rand(3, 4))\n self.submodule = MySubModule()\n\n def forward(self, x):\n return self.submodule(x + self.param).clamp(min=0.0, max=1.0)\n```\n\nThe forward method has a single line of code which we can unravel as:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "1. Add `self.param` to `x`\n2. Pass x through self.submodule. Here we would need to consider the steps happening in that submodule. I\u2019m just going to use dummy operation names for illustration:\n\tI. submodule.op_1\n\tII. submodule.op_2\n3. Apply the clamp operation\n\nSo even if we point at this one line, the question then is: \u201cFor which step do we want to extract the output?\u201d.\n\n[FX](https://pytorch.org/docs/stable/fx.html) is a core PyTorch toolkit that (oversimplifying) does the unravelling I just mentioned. It does something called \u201csymbolic tracing\u201d, which means the Python code is interpreted and stepped through, operation-by-operation, using some dummy proxy for a real input. Introducing some nomenclature, each step as described above is considered a **\u201cnode\u201d**, and consecutive nodes are connected to one another to form a **\u201cgraph\u201d** (not unlike the common mathematical notion of a graph). Here are the \u201csteps\u201d above translated to this concept of a graph.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "\n\t
\n\t
\n\t\tFigure 3: Graphical representation of the result of symbolically tracing our example of a simple forward method.\n
\n\nNote that we call this a graph, and not just a set of steps, because it\u2019s possible for the graph to branch off and recombine. Think of the skip connection in a residual block. This would look something like:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "\n\t
\n\t
\n\t\tFigure 4: Graphical representation of a residual skip connection. The middle node is like the main branch of a residual block, and the final node represents the sum of the input and output of the main branch.\n
\n\nNow, TorchVision\u2019s **[get_graph_node_names](https://pytorch.org/vision/stable/feature_extraction.html#torchvision.models.feature_extraction.get_graph_node_names)** function applies FX as described above, and in the process of doing so, tags each node with a human readable name. Let\u2019s try this with our toy CNN model from the previous section:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "```python\nmodel = CNN(3, 4, 10)\nfrom torchvision.models.feature_extraction import get_graph_node_names\nnodes, _ = get_graph_node_names(model)\nprint(nodes)\n```\nwhich will result in:\n```python\n['x', 'blocks.0.convs.0.0', 'blocks.0.convs.0.1', 'blocks.0.convs.1.0', 'blocks.0.convs.1.1', 'blocks.0.downsample', 'blocks.1.convs.0.0', 'blocks.1.convs.0.1', 'blocks.1.convs.1.0', 'blocks.1.convs.1.1', 'blocks.1.convs.2.0', 'blocks.1.convs.2.1', 'blocks.1.downsample', 'blocks.2.convs.0.0', 'blocks.2.convs.0.1', 'blocks.2.convs.1.0', 'blocks.2.convs.1.1', 'blocks.2.convs.2.0', 'blocks.2.convs.2.1', 'blocks.2.downsample', 'blocks.3.convs.0.0', 'blocks.3.convs.0.1', 'blocks.3.convs.1.0', 'blocks.3.convs.1.1', 'blocks.3.convs.2.0', 'blocks.3.convs.2.1', 'blocks.3.downsample', 'global_pool', 'flatten', 'cls']\n```\n\nWe can read these node names as hierarchically organised \u201caddresses\u201d for the operations of interest. For example 'blocks.1.downsample' refers to the MaxPool2d layer in the second `ConvBlock`.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "[`create_feature_extractor`](https://pytorch.org/vision/stable/feature_extraction.html#torchvision.models.feature_extraction.create_feature_extractor), which is where all the magic happens, goes a few steps further than **`get_graph_node_names`**. It takes desired node names as one of the input arguments, and then uses more FX core functionality to:\n\n1. Assign the desired nodes as outputs.\n2. Prune unnecessary downstream nodes and their associated parameters.\n3. Translate the resulting graph back into Python code.\n4. Return another PyTorch Module to the user. This has the python code from step 3 as the forward method.\n\nAs a demonstration, here\u2019s how we would apply `create_feature_extractor` to get the 4 feature maps from our toy CNN model", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "```python\nfrom torchvision.models.feature_extraction import create_feature_extractor\n# Confused about the node specification here?\n# We are allowed to provide truncated node names, and `create_feature_extractor`\n# will choose the last node with that prefix.\nfeature_extractor = create_feature_extractor(\n\tmodel, return_nodes=['blocks.0', 'blocks.1', 'blocks.2', 'blocks.3'])\n# `out` will be a dict of Tensors, each representing a feature map\nout = feature_extractor(torch.zeros(1, 3, 32, 32))\n```\n\nIt\u2019s as simple as that. When it comes down to it, FX feature extraction is just a way of making it possible to do what some of us would have naively hoped for when we first started programming: *\u201cjust give me the output of this code (*points finger at screen)\u201d*.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
{"page_content": "- [ ] \u2026 does not require us to fiddle with source code.\n- [ ] \u2026 provides full flexibility in terms of accessing any intermediate transformation of our inputs, whether they are the results of a module or a functional operation\n- [ ] \u2026 does drop unnecessary computations steps once features have been extracted\n- [ ] \u2026 and I didn\u2019t mention this before, but it\u2019s also TorchScript friendly!\n\nHere\u2019s that table again with another row added for FX feature extraction", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "| | Can use source code as is without any modifications or rewriting | Full flexibility in accessing features | Drops unnecessary computational steps | TorchScript friendly |", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "| Modify forward method | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES |", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "| New module that reuses submodules / parameters of original module | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES |", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "| Hooks | YES | Mostly YES. Only outputs of submodules | NO | NO |", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "| FX | YES | YES | YES | YES |", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Table 2: A copy of Table 1 with an added row for FX feature extraction. FX feature extraction gets YES across the board!", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Current FX Limitations\n\nAlthough I would have loved to end the post there, FX does have some of its own limitations which boil down to:\n\n1. There may be some Python code that isn\u2019t yet handled by FX when it comes to the step of interpretation and translation into a graph.\n2. Dynamic control flow can\u2019t be represented in terms of a static graph.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "The easiest thing to do when these problems crop up is to bundle the underlying code into a \u201cleaf node\u201d. Recall the example graph from Figure 3? Conceptually, we may agree that the `submodule` should be treated as a node in itself rather than a set of nodes representing the underlying operations. If we do so, we can redraw the graph as:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "\n\t
\n\t
\n\t\tFigure 5: The individual operations within `submodule` may (left - within red box), may be consolidated into one node (right - node #2) if we consider the `submodule` as a \"leaf\" node.\n
", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "We would want to do so if there is some problematic code within the submodule, but we don\u2019t have any need for extracting any intermediate transformations from within it. In practice, this is easily achievable by providing a keyword argument to create_feature_extractor or get_graph_node_names.\n\n\n```python\nmodel = CNN(3, 4, 10)\nnodes, _ = get_graph_node_names(model, tracer_kwargs={'leaf_modules': [ConvBlock]})\nprint(nodes)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "for which the output will be:\n\n```python\n['x', 'blocks.0', 'blocks.1', 'blocks.2', 'blocks.3', 'global_pool', 'flatten', 'cls']", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Notice how, as compared to previously, all the nodes for any given `ConvBlock` are consolidated into a single node.\n\nWe could do something similar with functions. For example, Python\u2019s inbuilt `len` needs to be wrapped and the result should be treated as a leaf node. Here\u2019s how you can do that with core FX functionality:\n\n```python\ntorch.fx.wrap('len')\n\nclass MyModule(nn.Module):\n def forward(self, x):\n x += 1\n len(x)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "model = MyModule()\nfeature_extractor = create_feature_extractor(model, return_nodes=['add'])", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "For functions you define, you may instead use another keyword argument to `create_feature_extractor` (minor detail: here\u2019s[ why you might want to do it this way instead](https://github.com/pytorch/pytorch/issues/62021#issue-950458396)):\n\n\n```python\ndef myfunc(x):\n return len(x)\n\nclass MyModule(nn.Module):\n def forward(self, x):\n x += 1\n myfunc(x)\n\nmodel = MyModule()\nfeature_extractor = create_feature_extractor(\n model, return_nodes=['add'], tracer_kwargs={'autowrap_functions': [myfunc]})", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Notice that none of the fixes above involved modifying source code.\n\nOf course, there may be times when the very intermediate transformation one is trying to get access to is within the same forward method or function that is causing problems. Here, we can\u2019t just treat that module or function as a leaf node, because then we can\u2019t access the intermediate transformations within. In these cases, some rewriting of the source code will be needed. Here are some examples (not exhaustive)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "- FX will raise an error when trying to trace through code with an `assert` statement. In this case you may need to remove that assertion or switch it with [`torch._assert`](https://pytorch.org/docs/stable/generated/torch._assert.html) (this is not a public function - so consider it a bandaid and use with caution).", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "- Symbolically tracing in-place changes to slices of tensors is not supported. You will need to make a new variable for the slice, apply the operation, then reconstruct the original tensor using concatenation or stacking.\n- Representing dynamic control flow in a static graph is just not logically possible. See if you can distill the coded logic down to something that is not dynamic - see FX documentation for tips.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "In general, you may consult the FX documentation for more detail on the [limitations of symbolic tracing](https://pytorch.org/docs/stable/fx.html#limitations-of-symbolic-tracing) and the possible workarounds.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Conclusion", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "We did a quick recap on feature extraction and why one might want to do it. Although there are existing methods for doing feature extraction in PyTorch they all have rather significant shortcomings. We learned how TorchVision\u2019s FX feature extraction utility works and what makes it so versatile compared to the existing methods. While there are still some minor kinks to iron out for the latter, we understand the limitations, and can trade them off against the limitations of other methods depending on our use", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "case. Hopefully by adding this new utility to your PyTorch toolkit, you\u2019re now equipped to handle the vast majority of feature extraction requirements you may come across.", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "Happy coding!", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch Ecosystem Day 2021 Recap and New Contributor Resources'\nauthor: Team PyTorch\n---\n\nThank you to our incredible community for making the first ever PyTorch Ecosystem Day a success! The day was filled with discussions on new developments, trends and challenges showcased through 71 posters, 32 breakout sessions and 6 keynote speakers. \n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
-{"page_content": "Special thanks to our keynote speakers: Piotr Bialecki, Ritchie Ng, Miquel Farr\u00e9, Joe Spisak, Geeta Chauhan, and Suraj Subramanian who shared updates from the latest release of PyTorch, exciting work being done with partners, use case example from Disney, the growth and development of the PyTorch community in Asia Pacific, and latest contributor highlights.", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
-{"page_content": "If you missed the opening talks, you rewatch them here:\n* [Morning/EMEA Opening Talks](https://www.youtube.com/watch?v=MYE01-XaSZA)\n* [Evening/APAC Opening Talks](https://www.youtube.com/watch?v=CjU_6OaYKpw)", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
-{"page_content": "In addition to the talks, we had 71 posters covering various topics such as multimodal, NLP, compiler, distributed training, researcher productivity tools, AI accelerators, and more. From the event, it was clear that an underlying thread that ties all of these different projects together is the cross-collaboration of the PyTorch community. Thank you for continuing to push the state of the art with PyTorch!", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
-{"page_content": "To view the full catalogue of poster, please visit **[PyTorch Ecosystem Day 2021 Event Page](https://pytorch.org/ecosystem/pted/2021)**.", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
-{"page_content": "New Contributor Resources \nToday, we are also sharing new contributor resources that we are trying out to give you the most access to up-to-date news, networking opportunities and more. \n* [Contributor Newsletter](https://pytorch.org/resources/contributors/) - Includes curated news including RFCs, feature roadmaps, notable PRs, editorials from developers, and more to support keeping track of everything that\u2019s happening in our community.", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
-{"page_content": "* [Contributors Discussion Forum](https://dev-discuss.pytorch.org/) - Designed for contributors to learn and collaborate on the latest development across PyTorch. \n* [PyTorch Developer Podcast (Beta)](https://pytorch-dev-podcast.simplecast.com/) - Edward Yang, PyTorch Research Scientist, at Facebook AI shares bite-sized (10 to 20 mins) podcast episodes discussing topics about all sorts of internal development topics in PyTorch.", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
-{"page_content": "Thank you,\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Geospatial deep learning with TorchGeo\"\nauthor: Adam Stewart (University of Illinois at Urbana-Champaign), Caleb Robinson (Microsoft AI for Good Research Lab), Isaac Corley (University of Texas at San Antonio)\nfeatured-img: 'assets/images/torchgeo-hurricane.jpg'\n---\n\nTorchGeo is a PyTorch domain library providing datasets, samplers, transforms, and pre-trained models specific to geospatial data.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n https://github.com/microsoft/torchgeo\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "For decades, Earth observation satellites, aircraft, and more recently UAV platforms have been collecting increasing amounts of imagery of the Earth\u2019s surface. With information about seasonal and long-term trends, remotely sensed imagery can be invaluable for solving some of the greatest challenges to humanity, including climate change adaptation, natural disaster monitoring, water resource management, and food security for a growing global population. From a computer vision perspective, this includes", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "applications like land cover mapping (semantic segmentation), deforestation and flood monitoring (change detection), glacial flow (pixel tracking), hurricane tracking and intensity estimation (regression), and building and road detection (object detection, instance segmentation). By leveraging recent advancements in deep learning architectures, cheaper and more powerful GPUs, and petabytes of freely available satellite imagery datasets, we can come closer to solving these important problems.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "| | Can use source code as is without any modifications or rewriting | Full flexibility in accessing features | Drops unnecessary computational steps | TorchScript friendly |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| Modify forward method | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES | \n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| New module that reuses submodules / parameters of original module | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| Hooks | YES | Mostly YES. Only outputs of submodules | NO | NO |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| FX | YES | YES | YES | YES |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "Table 2: A copy of Table 1 with an added row for FX feature extraction. FX feature extraction gets YES across the board!\n\n\n## Current FX Limitations\n\nAlthough I would have loved to end the post there, FX does have some of its own limitations which boil down to:\n\n1. There may be some Python code that isn\u2019t yet handled by FX when it comes to the step of interpretation and translation into a graph.\n2. Dynamic control flow can\u2019t be represented in terms of a static graph.\n\nThe easiest thing to do when these problems crop up is to bundle the underlying code into a \u201cleaf node\u201d. Recall the example graph from Figure 3? Conceptually, we may agree that the `submodule` should be treated as a node in itself rather than a set of nodes representing the underlying operations. If we do so, we can redraw the graph as:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "\n\t
\n\t
\n\t\tFigure 5: The individual operations within `submodule` may (left - within red box), may be consolidated into one node (right - node #2) if we consider the `submodule` as a \"leaf\" node.\n
\n\n\nWe would want to do so if there is some problematic code within the submodule, but we don\u2019t have any need for extracting any intermediate transformations from within it. In practice, this is easily achievable by providing a keyword argument to create_feature_extractor or get_graph_node_names.\n\n\n```python\nmodel = CNN(3, 4, 10)\nnodes, _ = get_graph_node_names(model, tracer_kwargs={'leaf_modules': [ConvBlock]})\nprint(nodes)\n```\n\nfor which the output will be:", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "```python\n['x', 'blocks.0', 'blocks.1', 'blocks.2', 'blocks.3', 'global_pool', 'flatten', 'cls']\n```\n\nNotice how, as compared to previously, all the nodes for any given `ConvBlock` are consolidated into a single node.\n\nWe could do something similar with functions. For example, Python\u2019s inbuilt `len` needs to be wrapped and the result should be treated as a leaf node. Here\u2019s how you can do that with core FX functionality:\n\n```python\ntorch.fx.wrap('len')\n\nclass MyModule(nn.Module):\n def forward(self, x):\n x += 1\n len(x)\n\nmodel = MyModule()\nfeature_extractor = create_feature_extractor(model, return_nodes=['add'])\n```\n\nFor functions you define, you may instead use another keyword argument to `create_feature_extractor` (minor detail: here\u2019s[ why you might want to do it this way instead](https://github.com/pytorch/pytorch/issues/62021#issue-950458396)):\n\n\n```python\ndef myfunc(x):\n return len(x)\n\nclass MyModule(nn.Module):\n def forward(self, x):\n x += 1\n myfunc(x)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "model = MyModule()\nfeature_extractor = create_feature_extractor(\n model, return_nodes=['add'], tracer_kwargs={'autowrap_functions': [myfunc]})\n```\n\nNotice that none of the fixes above involved modifying source code.\n\nOf course, there may be times when the very intermediate transformation one is trying to get access to is within the same forward method or function that is causing problems. Here, we can\u2019t just treat that module or function as a leaf node, because then we can\u2019t access the intermediate transformations within. In these cases, some rewriting of the source code will be needed. Here are some examples (not exhaustive)", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "- FX will raise an error when trying to trace through code with an `assert` statement. In this case you may need to remove that assertion or switch it with [`torch._assert`](https://pytorch.org/docs/stable/generated/torch._assert.html) (this is not a public function - so consider it a bandaid and use with caution).\n- Symbolically tracing in-place changes to slices of tensors is not supported. You will need to make a new variable for the slice, apply the operation, then reconstruct the original tensor using concatenation or stacking.\n- Representing dynamic control flow in a static graph is just not logically possible. See if you can distill the coded logic down to something that is not dynamic - see FX documentation for tips.\n\nIn general, you may consult the FX documentation for more detail on the [limitations of symbolic tracing](https://pytorch.org/docs/stable/fx.html#limitations-of-symbolic-tracing) and the possible workarounds.\n\n## Conclusion", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "## Conclusion\n\nWe did a quick recap on feature extraction and why one might want to do it. Although there are existing methods for doing feature extraction in PyTorch they all have rather significant shortcomings. We learned how TorchVision\u2019s FX feature extraction utility works and what makes it so versatile compared to the existing methods. While there are still some minor kinks to iron out for the latter, we understand the limitations, and can trade them off against the limitations of other methods depending on our use case. Hopefully by adding this new utility to your PyTorch toolkit, you\u2019re now equipped to handle the vast majority of feature extraction requirements you may come across.\n\nHappy coding!", "metadata": {"source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch Ecosystem Day 2021 Recap and New Contributor Resources'\nauthor: Team PyTorch\n---\n\nThank you to our incredible community for making the first ever PyTorch Ecosystem Day a success! The day was filled with discussions on new developments, trends and challenges showcased through 71 posters, 32 breakout sessions and 6 keynote speakers. \n\n\n

\n
\n\nSpecial thanks to our keynote speakers: Piotr Bialecki, Ritchie Ng, Miquel Farr\u00e9, Joe Spisak, Geeta Chauhan, and Suraj Subramanian who shared updates from the latest release of PyTorch, exciting work being done with partners, use case example from Disney, the growth and development of the PyTorch community in Asia Pacific, and latest contributor highlights.", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
+{"page_content": "If you missed the opening talks, you rewatch them here:\n* [Morning/EMEA Opening Talks](https://www.youtube.com/watch?v=MYE01-XaSZA)\n* [Evening/APAC Opening Talks](https://www.youtube.com/watch?v=CjU_6OaYKpw)\n\nIn addition to the talks, we had 71 posters covering various topics such as multimodal, NLP, compiler, distributed training, researcher productivity tools, AI accelerators, and more. From the event, it was clear that an underlying thread that ties all of these different projects together is the cross-collaboration of the PyTorch community. Thank you for continuing to push the state of the art with PyTorch! \n\nTo view the full catalogue of poster, please visit **[PyTorch Ecosystem Day 2021 Event Page](https://pytorch.org/ecosystem/pted/2021)**.", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
+{"page_content": "### New Contributor Resources \nToday, we are also sharing new contributor resources that we are trying out to give you the most access to up-to-date news, networking opportunities and more. \n* [Contributor Newsletter](https://pytorch.org/resources/contributors/) - Includes curated news including RFCs, feature roadmaps, notable PRs, editorials from developers, and more to support keeping track of everything that\u2019s happening in our community. \n* [Contributors Discussion Forum](https://dev-discuss.pytorch.org/) - Designed for contributors to learn and collaborate on the latest development across PyTorch. \n* [PyTorch Developer Podcast (Beta)](https://pytorch-dev-podcast.simplecast.com/) - Edward Yang, PyTorch Research Scientist, at Facebook AI shares bite-sized (10 to 20 mins) podcast episodes discussing topics about all sorts of internal development topics in PyTorch.\n\nThank you,\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Geospatial deep learning with TorchGeo\"\nauthor: Adam Stewart (University of Illinois at Urbana-Champaign), Caleb Robinson (Microsoft AI for Good Research Lab), Isaac Corley (University of Texas at San Antonio)\nfeatured-img: 'assets/images/torchgeo-hurricane.jpg'\n---\n\nTorchGeo is a PyTorch domain library providing datasets, samplers, transforms, and pre-trained models specific to geospatial data.\n\n\n
\n
\n\n\n https://github.com/microsoft/torchgeo\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "For decades, Earth observation satellites, aircraft, and more recently UAV platforms have been collecting increasing amounts of imagery of the Earth\u2019s surface. With information about seasonal and long-term trends, remotely sensed imagery can be invaluable for solving some of the greatest challenges to humanity, including climate change adaptation, natural disaster monitoring, water resource management, and food security for a growing global population. From a computer vision perspective, this includes applications like land cover mapping (semantic segmentation), deforestation and flood monitoring (change detection), glacial flow (pixel tracking), hurricane tracking and intensity estimation (regression), and building and road detection (object detection, instance segmentation). By leveraging recent advancements in deep learning architectures, cheaper and more powerful GPUs, and petabytes of freely available satellite imagery datasets, we can come closer to solving these important problems.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
{"page_content": "\n
\n
\n\n\nNational Oceanic and Atmospheric Administration satellite image of Hurricane Katrina, taken on August 28, 2005 (source). Geospatial machine learning libraries like TorchGeo can be used to detect, track, and predict future trajectories of hurricanes and other natural disasters.\n
\n\n# The challenges", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "In traditional computer vision datasets, such as ImageNet, the image files themselves tend to be rather simple and easy to work with. Most images have 3 spectral bands (RGB), are stored in common file formats like PNG or JPEG, and can be easily loaded with popular software libraries like [PIL](https://pillow.readthedocs.io/en/stable/) or [OpenCV](https://opencv.org/). Each image in these datasets is usually small enough to pass directly into a neural network. Furthermore, most of these datasets contain a", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "finite number of well-curated images that are assumed to be independent and identically distributed, making train-val-test splits straightforward. As a result of this relative homogeneity, the same pre-trained models (e.g., CNNs pretrained on ImageNet) have shown to be effective across a wide range of vision tasks using transfer learning methods. Existing libraries, such as [torchvision](https://github.com/pytorch/vision), handle these simple cases well, and have been used to make large advances in vision", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "tasks over the past decade.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Remote sensing imagery is not so uniform. Instead of simple RGB images, satellites tend to capture images that are multispectral ([Landsat 8](https://www.usgs.gov/landsat-missions) has 11 spectral bands) or even hyperspectral ([Hyperion](https://www.usgs.gov/centers/eros/science/usgs-eros-archive-earth-observing-one-eo-1-hyperion) has 242 spectral bands). These images capture information at a wider range of wavelengths (400 nm\u201315 \u00b5m), far outside of the visible spectrum. Different satellites also have very", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "different spatial resolutions\u2014[GOES](https://www.goes.noaa.gov/) has a resolution of 4 km/px, [Maxar](https://www.maxar.com/products/satellite-imagery) imagery is 30 cm/px, and drone imagery resolution can be as high as 7 mm/px. These datasets almost always have a temporal component, with satellite revisists that are daily, weekly, or biweekly. Images often have overlap with other images in the dataset, and need to be stitched together based on geographic metadata. These images tend to be very large (e.g.,", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "10K x 10K pixels), so it isn't possible to pass an entire image through a neural network. This data is distributed in hundreds of different raster and vector file formats like GeoTIFF and ESRI Shapefile, requiring specialty libraries like [GDAL](https://gdal.org/) to load.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "\nFrom left to right: Mercator, Albers Equal Area, and Interrupted Goode Homolosine projections (source). Geospatial data is associated with one of many different types of reference systems that project the 3D Earth onto a 2D representation. Combining data from different sources often involves re-projecting to a common reference system in order to ensure that all layers are aligned.\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Although each image is 2D, the Earth itself is 3D. In order to stitch together images, they first need to be projected onto a 2D representation of the Earth, called a coordinate reference system (CRS). Most people are familiar with equal angle representations like Mercator that distort the size of regions (Greenland looks larger than Africa even though Africa is 15x larger), but there are many other CRSs that are commonly used. Each dataset may use a different CRS, and each image within a single dataset may", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "also be in a unique CRS. In order to use data from multiple layers, they must all share a common CRS, otherwise the data won't be properly aligned. For those who aren't familiar with remote sensing data, this can be a daunting task.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\nEven if you correctly georeference images during indexing, if you don't project them to a common CRS, you'll end up with rotated images with nodata values around them, and the images won't be pixel-aligned.\n
\n\n# The solution", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "At the moment, it can be quite challenging to work with both deep learning models and geospatial data without having expertise in both of these very different fields. To address these challenges, we've built TorchGeo, a PyTorch domain library for working with geospatial data. TorchGeo is designed to make it simple:\n\n1. for machine learning experts to work with geospatial data, and\n2. for remote sensing experts to explore machine learning solutions.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "TorchGeo is not just a research project, but a production-quality library that uses continuous integration to test every commit with a range of Python versions on a range of platforms (Linux, macOS, Windows). It can be easily installed with any of your favorite package managers, including pip, conda, and [spack](https://spack.io):\n\n```\n$ pip install torchgeo", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "TorchGeo is designed to have the same API as other PyTorch domain libraries like torchvision, torchtext, and torchaudio. If you already use torchvision in your workflow for computer vision datasets, you can switch to TorchGeo by changing only a few lines of code. All TorchGeo datasets and samplers are compatible with the PyTorch ``DataLoader`` class, meaning that you can take advantage of wrapper libraries like [PyTorch Lightning](https://www.pytorchlightning.ai/) for distributed training. In the following", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "sections, we'll explore possible use cases for TorchGeo to show how simple it is to use.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "# Geospatial datasets and samplers\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "\nExample application in which we combine A) a scene from Landsat 8 and B) Cropland Data Layer labels, even though these files are in different EPSG projections. We want to sample patches C) and D) from these datasets using a geospatial bounding box as an index.\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Many remote sensing applications involve working with [*geospatial datasets*](https://torchgeo.readthedocs.io/en/latest/api/datasets.html#geospatial-datasets) \u2014datasets with geographic metadata. In TorchGeo, we define a ``GeoDataset`` class to represent these kinds of datasets. Instead of being indexed by an integer, each ``GeoDataset`` is indexed by a spatiotemporal bounding box, meaning that two or more datasets covering a different geographic extent can be intelligently combined.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "In this example, we show how easy it is to work with geospatial data and to sample small image patches from a combination of Landsat and Cropland Data Layer (CDL) data using TorchGeo. First, we assume that the user has Landsat 7 and 8 imagery downloaded. Since Landsat 8 has more spectral bands than Landsat 7, we'll only use the bands that both satellites have in common. We'll create a single dataset including all images from both Landsat 7 and 8 data by taking the union between these two datasets.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import CDL, Landsat7, Landsat8, stack_samples\nfrom torchgeo.samplers import RandomGeoSampler\n\nlandsat7 = Landsat7(root=\"...\")\nlandsat8 = Landsat8(root=\"...\", bands=Landsat8.all_bands[1:-2])\nlandsat = landsat7 | landsat8", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Next, we take the intersection between this dataset and the CDL dataset. We want to take the intersection instead of the union to ensure that we only sample from regions where we have both Landsat and CDL data. Note that we can automatically download and checksum CDL data. Also note that each of these datasets may contain files in different CRSs or resolutions, but TorchGeo automatically ensures that a matching CRS and resolution is used.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "```c++\ncdl = CDL(root=\"...\", download=True, checksum=True)\ndataset = landsat & cdl", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "This dataset can now be used with a PyTorch data loader. Unlike benchmark datasets, geospatial datasets often include very large images. For example, the CDL dataset consists of a single image covering the entire contiguous United States. In order to sample from these datasets using geospatial coordinates, TorchGeo defines a number of [*samplers*](https://torchgeo.readthedocs.io/en/latest/api/samplers.html). In this example, we'll use a random sampler that returns 256 x 256 pixel images and 10,000 samples", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "per epoch. We'll also use a custom collation function to combine each sample dictionary into a mini-batch of samples.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nsampler = RandomGeoSampler(dataset, size=256, length=10000)\ndataloader = DataLoader(dataset, batch_size=128, sampler=sampler, collate_fn=stack_samples)", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "This data loader can now be used in your normal training/evaluation pipeline.\n\n```c++\nfor batch in dataloader:\n image = batch[\"image\"]\n mask = batch[\"mask\"]\n\n # train a model, or make predictions using a pre-trained model", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Many applications involve intelligently composing datasets based on geospatial metadata like this. For example, users may want to:\n\n- Combine datasets for multiple image sources and treat them as equivalent (e.g., Landsat 7 and 8)\n- Combine datasets for disparate geospatial locations (e.g., Chesapeake NY and PA)\n\nThese combinations require that all queries are present in *at least one* dataset, and can be created using a ``UnionDataset``. Similarly, users may want to:", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "- Combine image and target labels and sample from both simultaneously (e.g., Landsat and CDL)\n- Combine datasets for multiple image sources for multimodal learning or data fusion (e.g., Landsat and Sentinel)\n\nThese combinations require that all queries are present in *both* datasets, and can be created using an ``IntersectionDataset``. TorchGeo automatically composes these datasets for you when you use the intersection (``&``) and union \\(``|``\\) operators.\n\n# Multispectral and geospatial transforms", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "In deep learning, it's common to augment and transform the data so that models are robust to variations in the input space. Geospatial data can have variations such as seasonal changes and warping effects, as well as image processing and capture issues like cloud cover and atmospheric distortion. TorchGeo utilizes augmentations and transforms from the [Kornia](https://kornia.github.io/) library, which supports GPU acceleration and supports multispectral imagery with more than 3 channels.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Traditional geospatial analyses compute and visualize spectral indices which are combinations of multispectral bands. Spectral indices are designed to highlight areas of interest in a multispectral image relevant to some application, such as vegetation health, areas of man-made change or increasing urbanization, or snow cover. TorchGeo supports numerous [*transforms*](https://torchgeo.readthedocs.io/en/latest/api/transforms.html), which can compute common spectral indices and append them as additional bands", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "to a multispectral image tensor.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Below, we show a simple example where we compute the Normalized Difference Vegetation Index (NDVI) on a Sentinel-2 image. NDVI measures the presence of vegetation and vegetation health and is computed as the normalized difference between the red and near-infrared (NIR) spectral bands. Spectral index transforms operate on sample dictionaries returned from TorchGeo datasets and append the resulting spectral index to the image channel dimension.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "First, we instantiate a Sentinel-2 dataset and load a sample image. Then, we plot the true color (RGB) representation of this data to see the region we are looking at.\n\n```c++\nimport matplotlib.pyplot as plt\nfrom torchgeo.datasets import Sentinel2\nfrom torchgeo.transforms import AppendNDVI\n\ndataset = Sentinel2(root=\"...\")\nsample = dataset[...]\nfig = dataset.plot(sample)\nplt.show()", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Next, we instantiate and compute an NDVI transform, appending this new channel to the end of the image. Sentinel-2 imagery uses index 0 for its red band and index 3 for its NIR band. In order to visualize the data, we also normalize the image. NDVI values can range from -1 to 1, but we want to use the range 0 to 1 for plotting.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "```c++\ntransform = AppendNDVI(index_red=0, index_nir=3)\nsample = transform(sample)\nsample[\"image\"][-1] = (sample[\"image\"][-1] + 1) / 2\nplt.imshow(sample[\"image\"][-1], cmap=\"RdYlGn_r\")\nplt.show()", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\nTrue color (left) and NDVI (right) of the Texas Hill Region, taken on November 16, 2018 by the Sentinel-2 satellite. In the NDVI image, red indicates water bodies, yellow indicates barren soil, light green indicates unhealthy vegetation, and dark green indicates healthy vegetation.\n
\n\n# Benchmark datasets", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "One of the driving factors behind progress in computer vision is the existence of standardized benchmark datasets like ImageNet and MNIST. Using these datasets, researchers can directly compare the performance of different models and training procedures to determine which perform the best. In the remote sensing domain, there are many such datasets, but due to the aforementioned difficulties of working with this data and the lack of existing libraries for loading these datasets, many researchers opt to use", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "their own custom datasets.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "One of the goals of TorchGeo is to provide easy-to-use data loaders for these existing datasets. TorchGeo includes a number of [*benchmark datasets*](https://torchgeo.readthedocs.io/en/latest/api/datasets.html#non-geospatial-datasets) \u2014datasets that include both input images and target labels. This includes datasets for tasks like image classification, regression, semantic segmentation, object detection, instance segmentation, change detection, and more.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "If you've used torchvision before, these types of datasets should be familiar. In this example, we'll create a dataset for the Northwestern Polytechnical University (NWPU) very-high-resolution ten-class (VHR-10) geospatial object detection dataset. This dataset can be automatically downloaded, checksummed, and extracted, just like with torchvision.\n\n```c++\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import VHR10", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "dataset = VHR10(root=\"...\", download=True, checksum=True)\ndataloader = DataLoader(dataset, batch_size=128, shuffle=True, num_workers=4)\n\nfor batch in dataloader:\n image = batch[\"image\"]\n label = batch[\"label\"]\n\n # train a model, or make predictions using a pre-trained model", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "All TorchGeo datasets are compatible with PyTorch data loaders, making them easy to integrate into existing training workflows. The only difference between a benchmark dataset in TorchGeo and a similar dataset in torchvision is that each dataset returns a dictionary with keys for each PyTorch ``Tensor``.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "\nExample predictions from a Mask R-CNN model trained on the NWPU VHR-10 dataset. The model predicts sharp bounding boxes and masks for all objects with high confidence scores.\n
\n\n# Reproducibility with PyTorch Lightning", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "In traditional computer vision datasets, such as ImageNet, the image files themselves tend to be rather simple and easy to work with. Most images have 3 spectral bands (RGB), are stored in common file formats like PNG or JPEG, and can be easily loaded with popular software libraries like [PIL](https://pillow.readthedocs.io/en/stable/) or [OpenCV](https://opencv.org/). Each image in these datasets is usually small enough to pass directly into a neural network. Furthermore, most of these datasets contain a finite number of well-curated images that are assumed to be independent and identically distributed, making train-val-test splits straightforward. As a result of this relative homogeneity, the same pre-trained models (e.g., CNNs pretrained on ImageNet) have shown to be effective across a wide range of vision tasks using transfer learning methods. Existing libraries, such as [torchvision](https://github.com/pytorch/vision), handle these simple cases well, and have been used to make large advances in vision tasks over the past decade.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "Remote sensing imagery is not so uniform. Instead of simple RGB images, satellites tend to capture images that are multispectral ([Landsat 8](https://www.usgs.gov/landsat-missions) has 11 spectral bands) or even hyperspectral ([Hyperion](https://www.usgs.gov/centers/eros/science/usgs-eros-archive-earth-observing-one-eo-1-hyperion) has 242 spectral bands). These images capture information at a wider range of wavelengths (400 nm\u201315 \u00b5m), far outside of the visible spectrum. Different satellites also have very different spatial resolutions\u2014[GOES](https://www.goes.noaa.gov/) has a resolution of 4 km/px, [Maxar](https://www.maxar.com/products/satellite-imagery) imagery is 30 cm/px, and drone imagery resolution can be as high as 7 mm/px. These datasets almost always have a temporal component, with satellite revisists that are daily, weekly, or biweekly. Images often have overlap with other images in the dataset, and need to be stitched together based on geographic metadata. These images tend to be very large (e.g., 10K x 10K pixels), so it isn't possible to pass an entire image through a neural network. This data is distributed in hundreds of different raster and vector file formats like GeoTIFF and ESRI Shapefile, requiring specialty libraries like [GDAL](https://gdal.org/) to load.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\n\n\nFrom left to right: Mercator, Albers Equal Area, and Interrupted Goode Homolosine projections (source). Geospatial data is associated with one of many different types of reference systems that project the 3D Earth onto a 2D representation. Combining data from different sources often involves re-projecting to a common reference system in order to ensure that all layers are aligned.\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "Although each image is 2D, the Earth itself is 3D. In order to stitch together images, they first need to be projected onto a 2D representation of the Earth, called a coordinate reference system (CRS). Most people are familiar with equal angle representations like Mercator that distort the size of regions (Greenland looks larger than Africa even though Africa is 15x larger), but there are many other CRSs that are commonly used. Each dataset may use a different CRS, and each image within a single dataset may also be in a unique CRS. In order to use data from multiple layers, they must all share a common CRS, otherwise the data won't be properly aligned. For those who aren't familiar with remote sensing data, this can be a daunting task.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "\nEven if you correctly georeference images during indexing, if you don't project them to a common CRS, you'll end up with rotated images with nodata values around them, and the images won't be pixel-aligned.\n
\n\n# The solution\n\nAt the moment, it can be quite challenging to work with both deep learning models and geospatial data without having expertise in both of these very different fields. To address these challenges, we've built TorchGeo, a PyTorch domain library for working with geospatial data. TorchGeo is designed to make it simple:\n\n1. for machine learning experts to work with geospatial data, and\n2. for remote sensing experts to explore machine learning solutions.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "TorchGeo is not just a research project, but a production-quality library that uses continuous integration to test every commit with a range of Python versions on a range of platforms (Linux, macOS, Windows). It can be easily installed with any of your favorite package managers, including pip, conda, and [spack](https://spack.io):\n\n```\n$ pip install torchgeo\n```\n\nTorchGeo is designed to have the same API as other PyTorch domain libraries like torchvision, torchtext, and torchaudio. If you already use torchvision in your workflow for computer vision datasets, you can switch to TorchGeo by changing only a few lines of code. All TorchGeo datasets and samplers are compatible with the PyTorch ``DataLoader`` class, meaning that you can take advantage of wrapper libraries like [PyTorch Lightning](https://www.pytorchlightning.ai/) for distributed training. In the following sections, we'll explore possible use cases for TorchGeo to show how simple it is to use.\n\n# Geospatial datasets and samplers", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\n\nExample application in which we combine A) a scene from Landsat 8 and B) Cropland Data Layer labels, even though these files are in different EPSG projections. We want to sample patches C) and D) from these datasets using a geospatial bounding box as an index.\n
\n\nMany remote sensing applications involve working with [*geospatial datasets*](https://torchgeo.readthedocs.io/en/latest/api/datasets.html#geospatial-datasets) \u2014datasets with geographic metadata. In TorchGeo, we define a ``GeoDataset`` class to represent these kinds of datasets. Instead of being indexed by an integer, each ``GeoDataset`` is indexed by a spatiotemporal bounding box, meaning that two or more datasets covering a different geographic extent can be intelligently combined.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
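To make this indexing model concrete, here is a minimal sketch (not from the original post; the dataset root, coordinates, and timestamps are placeholders) of querying a ``GeoDataset`` with a spatiotemporal bounding box:

```python
from torchgeo.datasets import BoundingBox, Sentinel2

dataset = Sentinel2(root="...")  # placeholder root, as in the snippets below

# A GeoDataset is indexed by a spatiotemporal bounding box rather than an integer.
# minx/maxx/miny/maxy are expressed in the dataset's CRS; mint/maxt are UNIX timestamps.
query = BoundingBox(minx=0.0, maxx=10_000.0, miny=0.0, maxy=10_000.0, mint=0.0, maxt=1.7e9)
sample = dataset[query]  # a dict containing an "image" tensor plus georeferencing metadata
print(sample["image"].shape)
```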
+{"page_content": "In this example, we show how easy it is to work with geospatial data and to sample small image patches from a combination of Landsat and Cropland Data Layer (CDL) data using TorchGeo. First, we assume that the user has Landsat 7 and 8 imagery downloaded. Since Landsat 8 has more spectral bands than Landsat 7, we'll only use the bands that both satellites have in common. We'll create a single dataset including all images from both Landsat 7 and 8 data by taking the union between these two datasets.\n\n```c++\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import CDL, Landsat7, Landsat8, stack_samples\nfrom torchgeo.samplers import RandomGeoSampler\n\nlandsat7 = Landsat7(root=\"...\")\nlandsat8 = Landsat8(root=\"...\", bands=Landsat8.all_bands[1:-2])\nlandsat = landsat7 | landsat8\n```", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "Next, we take the intersection between this dataset and the CDL dataset. We want to take the intersection instead of the union to ensure that we only sample from regions where we have both Landsat and CDL data. Note that we can automatically download and checksum CDL data. Also note that each of these datasets may contain files in different CRSs or resolutions, but TorchGeo automatically ensures that a matching CRS and resolution is used.\n\n```c++\ncdl = CDL(root=\"...\", download=True, checksum=True)\ndataset = landsat & cdl\n```", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "This dataset can now be used with a PyTorch data loader. Unlike benchmark datasets, geospatial datasets often include very large images. For example, the CDL dataset consists of a single image covering the entire contiguous United States. In order to sample from these datasets using geospatial coordinates, TorchGeo defines a number of [*samplers*](https://torchgeo.readthedocs.io/en/latest/api/samplers.html). In this example, we'll use a random sampler that returns 256 x 256 pixel images and 10,000 samples per epoch. We'll also use a custom collation function to combine each sample dictionary into a mini-batch of samples.\n\n```c++\nsampler = RandomGeoSampler(dataset, size=256, length=10000)\ndataloader = DataLoader(dataset, batch_size=128, sampler=sampler, collate_fn=stack_samples)\n```\n\nThis data loader can now be used in your normal training/evaluation pipeline.\n\n```c++\nfor batch in dataloader:\n image = batch[\"image\"]\n mask = batch[\"mask\"]", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "# train a model, or make predictions using a pre-trained model\n```\n\nMany applications involve intelligently composing datasets based on geospatial metadata like this. For example, users may want to:\n\n- Combine datasets for multiple image sources and treat them as equivalent (e.g., Landsat 7 and 8)\n- Combine datasets for disparate geospatial locations (e.g., Chesapeake NY and PA)\n\nThese combinations require that all queries are present in *at least one* dataset, and can be created using a ``UnionDataset``. Similarly, users may want to:\n\n- Combine image and target labels and sample from both simultaneously (e.g., Landsat and CDL)\n- Combine datasets for multiple image sources for multimodal learning or data fusion (e.g., Landsat and Sentinel)\n\nThese combinations require that all queries are present in *both* datasets, and can be created using an ``IntersectionDataset``. TorchGeo automatically composes these datasets for you when you use the intersection (``&``) and union \\(``|``\\) operators.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "# Multispectral and geospatial transforms\n\nIn deep learning, it's common to augment and transform the data so that models are robust to variations in the input space. Geospatial data can have variations such as seasonal changes and warping effects, as well as image processing and capture issues like cloud cover and atmospheric distortion. TorchGeo utilizes augmentations and transforms from the [Kornia](https://kornia.github.io/) library, which supports GPU acceleration and supports multispectral imagery with more than 3 channels.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "Traditional geospatial analyses compute and visualize spectral indices which are combinations of multispectral bands. Spectral indices are designed to highlight areas of interest in a multispectral image relevant to some application, such as vegetation health, areas of man-made change or increasing urbanization, or snow cover. TorchGeo supports numerous [*transforms*](https://torchgeo.readthedocs.io/en/latest/api/transforms.html), which can compute common spectral indices and append them as additional bands to a multispectral image tensor.\n\nBelow, we show a simple example where we compute the Normalized Difference Vegetation Index (NDVI) on a Sentinel-2 image. NDVI measures the presence of vegetation and vegetation health and is computed as the normalized difference between the red and near-infrared (NIR) spectral bands. Spectral index transforms operate on sample dictionaries returned from TorchGeo datasets and append the resulting spectral index to the image channel dimension.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "First, we instantiate a Sentinel-2 dataset and load a sample image. Then, we plot the true color (RGB) representation of this data to see the region we are looking at.\n\n```c++\nimport matplotlib.pyplot as plt\nfrom torchgeo.datasets import Sentinel2\nfrom torchgeo.transforms import AppendNDVI\n\ndataset = Sentinel2(root=\"...\")\nsample = dataset[...]\nfig = dataset.plot(sample)\nplt.show()\n```\n\nNext, we instantiate and compute an NDVI transform, appending this new channel to the end of the image. Sentinel-2 imagery uses index 0 for its red band and index 3 for its NIR band. In order to visualize the data, we also normalize the image. NDVI values can range from -1 to 1, but we want to use the range 0 to 1 for plotting.\n\n```c++\ntransform = AppendNDVI(index_red=0, index_nir=3)\nsample = transform(sample)\nsample[\"image\"][-1] = (sample[\"image\"][-1] + 1) / 2\nplt.imshow(sample[\"image\"][-1], cmap=\"RdYlGn_r\")\nplt.show()\n```\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "\nTrue color (left) and NDVI (right) of the Texas Hill Region, taken on November 16, 2018 by the Sentinel-2 satellite. In the NDVI image, red indicates water bodies, yellow indicates barren soil, light green indicates unhealthy vegetation, and dark green indicates healthy vegetation.\n
\n\n# Benchmark datasets\n\nOne of the driving factors behind progress in computer vision is the existence of standardized benchmark datasets like ImageNet and MNIST. Using these datasets, researchers can directly compare the performance of different models and training procedures to determine which perform the best. In the remote sensing domain, there are many such datasets, but due to the aforementioned difficulties of working with this data and the lack of existing libraries for loading these datasets, many researchers opt to use their own custom datasets.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "One of the goals of TorchGeo is to provide easy-to-use data loaders for these existing datasets. TorchGeo includes a number of [*benchmark datasets*](https://torchgeo.readthedocs.io/en/latest/api/datasets.html#non-geospatial-datasets) \u2014datasets that include both input images and target labels. This includes datasets for tasks like image classification, regression, semantic segmentation, object detection, instance segmentation, change detection, and more.\n\nIf you've used torchvision before, these types of datasets should be familiar. In this example, we'll create a dataset for the Northwestern Polytechnical University (NWPU) very-high-resolution ten-class (VHR-10) geospatial object detection dataset. This dataset can be automatically downloaded, checksummed, and extracted, just like with torchvision.\n\n```c++\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import VHR10", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "dataset = VHR10(root=\"...\", download=True, checksum=True)\ndataloader = DataLoader(dataset, batch_size=128, shuffle=True, num_workers=4)\n\nfor batch in dataloader:\n image = batch[\"image\"]\n label = batch[\"label\"]\n\n # train a model, or make predictions using a pre-trained model\n```\n\nAll TorchGeo datasets are compatible with PyTorch data loaders, making them easy to integrate into existing training workflows. The only difference between a benchmark dataset in TorchGeo and a similar dataset in torchvision is that each dataset returns a dictionary with keys for each PyTorch ``Tensor``.\n\n\n
\n
\n\n\nExample predictions from a Mask R-CNN model trained on the NWPU VHR-10 dataset. The model predicts sharp bounding boxes and masks for all objects with high confidence scores.\n
\n\n# Reproducibility with PyTorch Lightning", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
{"page_content": "Another key goal of TorchGeo is reproducibility. For many of these benchmark datasets, there is no predefined train-val-test split, or the predefined split has issues with class imbalance or geographic distribution. As a result, the performance metrics reported in the literature either can't be reproduced, or aren't indicative of how well a pre-trained model would work in a different geographic location.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "In order to facilitate direct comparisons between results published in the literature and further reduce the boilerplate code needed to run experiments with datasets in TorchGeo, we have created PyTorch Lightning [*datamodules*](https://torchgeo.readthedocs.io/en/latest/api/datamodules.html) with well-defined train-val-test splits and [*trainers*](https://torchgeo.readthedocs.io/en/latest/api/trainers.html) for various tasks like classification, regression, and semantic segmentation. These datamodules show", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "how to incorporate augmentations from the kornia library, include preprocessing transforms (with pre-calculated channel statistics), and let users easily experiment with hyperparameters related to the data itself (as opposed to the modeling process). Training a semantic segmentation model on the Inria Aerial Image Labeling dataset is as easy as a few imports and four lines of code.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nfrom pytorch_lightning import Trainer\nfrom torchgeo.datamodules import InriaAerialImageLabelingDataModule\nfrom torchgeo.trainers import SemanticSegmentationTask\n\ndatamodule = InriaAerialImageLabelingDataModule(root_dir=\"...\", batch_size=64, num_workers=6)\ntask = SemanticSegmentationTask(segmentation_model=\"unet\", encoder_weights=\"imagenet\", learning_rate=0.1)\ntrainer = Trainer(gpus=1, default_root_dir=\"...\")\n\ntrainer.fit(model=task, datamodule=datamodule)", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\nBuilding segmentations produced by a U-Net model trained on the Inria Aerial Image Labeling dataset. Reproducing these results is as simple as a few imports and four lines of code, making comparison of different models and training techniques simple and easy.\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "In our [preprint](https://arxiv.org/abs/2111.08872) we show a set of results that use the aforementioned datamodules and trainers to benchmark simple modeling approaches for several of the datasets in TorchGeo. For example, we find that a simple ResNet-50 can achieve state-of-the-art performance on the [So2Sat](https://ieeexplore.ieee.org/document/9014553) dataset. These types of baseline results are important for evaluating the contribution of different modeling choices when tackling problems with remotely", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "sensed data.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "# Future work and contributing\n\nThere is still a lot of remaining work to be done in order to make TorchGeo as easy to use as possible, especially for users without prior deep learning experience. One of the ways in which we plan to achieve this is by expanding our tutorials to include subjects like \"writing a custom dataset\" and \"transfer learning\", or tasks like \"land cover mapping\" and \"object detection\".", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Another important project we are working on is pre-training models. Most remote sensing researchers work with very small labeled datasets, and could benefit from pre-trained models and transfer learning approaches. TorchGeo is the first deep learning library to provide models pre-trained on multispectral imagery. Our goal is to provide models for different image modalities (optical, SAR, multispectral) and specific platforms (Landsat, Sentinel, MODIS) as well as benchmark results showing their performance", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "with different amounts of training data. Self-supervised learning is a promising method for training such models. Satellite imagery datasets often contain petabytes of imagery, but accurately labeled datasets are much harder to come by. Self-supervised learning methods will allow us to train directly on the raw imagery without needing large labeled datasets.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Aside from these larger projects, we're always looking to add new datasets, data augmentation transforms, and sampling strategies. If you're Python savvy and interested in contributing to TorchGeo, we would love to see contributions! TorchGeo is open source under an MIT license, so you can use it in almost any project.\n\nExternal links:", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "- **Homepage**: [https://github.com/microsoft/torchgeo](https://github.com/microsoft/torchgeo)\n- **Documentation**: [https://torchgeo.readthedocs.io/](https://torchgeo.readthedocs.io/)\n- **PyPI**: [https://pypi.org/project/torchgeo/](https://pypi.org/project/torchgeo/)\n- **Paper**: [https://arxiv.org/abs/2111.08872](https://arxiv.org/abs/2111.08872)\n\nIf you like TorchGeo, give us a star on GitHub! And if you use TorchGeo in your work, please cite our paper.\n\n# Acknowledgments", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "*We would like to thank all TorchGeo contributors for their efforts in creating the library, the Microsoft AI for Good program for support, and the PyTorch Team for their guidance. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993), the State of Illinois, and as of December, 2019, the National Geospatial-Intelligence Agency. Blue Waters is a joint effort of the University of Illinois at", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "Urbana-Champaign and its National Center for Supercomputing Applications. The research was supported in part by NSF grants IIS-1908104, OAC-1934634, and DBI-2021898.*", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'What\u2019s New in PyTorch Profiler 1.9?'\nauthor: Sabrina Smai, Program Manager on the AI Framework team at Microsoft\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Profiler v1.9 has been released! The goal of this new release (previous [PyTorch Profiler release](https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/)) is to provide you with new state-of-the-art tools to help diagnose and fix machine learning performance issues regardless of whether you are working on one or numerous machines. The objective is to target the execution steps that are the most costly in time and/or memory, and visualize the work load", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "distribution between GPUs and CPUs.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Here is a summary of the five major features being released:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "1.\t**Distributed Training View**: This helps you understand how much time and memory is consumed in your distributed training job. Many issues occur when you take a training model and split the load into worker nodes to be run in parallel as it can be a black box. The overall model goal is to speed up model training. This distributed training view will help you diagnose and debug issues within individual nodes.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "2.\t**Memory View**: This view allows you to understand your memory usage better. This tool will help you avoid the famously pesky Out of Memory error by showing active memory allocations at various points of your program run. \n3.\t**GPU Utilization Visualization**: This tool helps you make sure that your GPU is being fully utilized. \n4.\t**Cloud Storage Support**: Tensorboard plugin can now read profiling data from Azure Blob Storage, Amazon S3, and Google Cloud Platform.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "5.\t**Jump to Source Code**: This feature allows you to visualize stack tracing information and jump directly into the source code. This helps you quickly optimize and iterate on your code based on your profiling results.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Getting Started with PyTorch Profiling Tool\nPyTorch includes a profiling functionality called \u00ab PyTorch Profiler \u00bb. The PyTorch Profiler tutorial can be found [here](https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html).\n\nTo instrument your PyTorch code for profiling, you must:\n\n$ pip install torch-tb-profiler\n\n```python\nimport torch.profiler as profiler\nWith profiler.profile(XXXX)", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "**Comments**:\n\n\u2022 For CUDA and CPU profiling, see [below](https://github.com/pytorch/kineto/blob/master/tb_plugin/examples/resnet50_profiler_api.py): \n```\nwith torch.profiler.profile( \nactivities=[ \ntorch.profiler.ProfilerActivity.CPU, \ntorch.profiler.ProfilerActivity.CUDA], \n```\n\n\u2022\tWith profiler.record_function(\u201c$NAME\u201d): allows putting a decorator (a tag associated to a name) for a block of function\n\n\u2022\tProfile_memory=True parameter under profiler.profile allows you to profile CPU and GPU memory footprint", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Visualizing PyTorch Model Performance using PyTorch Profiler", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Distributed Training \n\nRecent advances in deep learning argue for the value of large datasets and large models, which requires you to scale out model training to more computational resources. Distributed Data Parallel (DDP) and NVIDIA Collective Communications Library (NCCL) are the widely adopted paradigms in PyTorch for accelerating your deep learning training. \n\nIn this release of PyTorch Profiler, DDP with NCCL backend is now supported.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Computation/Communication Overview\n\nIn the Computation/Communication overview under the Distributed training view, you can observe the computation-to-communication ratio of each worker and [load balancer](https://en.wikipedia.org/wiki/Load_balancing_(computing) nodes between worker as measured by granularity. \n\n**Scenario 1**:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "If the computation and overlapping time of one worker is much larger than the others, this may suggest an issue in the workload balance or worker being a straggler. Computation is the sum of kernel time on GPU minus the overlapping time. The overlapping time is the time saved by interleaving communications during computation. The more overlapping time represents better parallelism between computation and communication. Ideally the computation and communication completely overlap with each other.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Communication is the total communication time minus the overlapping time. The example image below displays how this scenario appears on Tensorboard.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
Figure: A straggler example
\n
\n\n**Scenario 2**:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "If there is a small batch size (i.e. less computation on each worker) or the data to be transferred is large, the computation-to-communication may also be small and be seen in the profiler with low GPU utilization and long waiting times. This computation/communication view will allow you to diagnose your code to reduce communication by adopting gradient accumulation, or to decrease the communication proportion by increasing batch size. DDP communication time depends on model size. Batch size has no", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "relationship with model size. So increasing batch size could make computation time longer and make computation-to-communication ratio bigger.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Synchronizing/Communication Overview", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "In the Synchronizing/Communication view, you can observe the efficiency of communication. This is done by taking the step time minus computation and communication time. Synchronizing time is part of the total communication time for waiting and synchronizing with other workers. The Synchronizing/Communication view includes initialization, data loader, CPU computation, and so on Insights like what is the ratio of total communication is really used for exchanging data and what is the idle time of waiting for", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "data from other workers can be drawn from this view.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
\n\nFor example, if there is an inefficient workload balance or straggler issue, you\u2019ll be able to identify it in this Synchronizing/Communication view. This view will show several workers\u2019 waiting time being longer than others. \n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "This table view above allows you to see the detailed statistics of all communication ops in each node. This allows you to see what operation types are being called, how many times each op is called, what is the size of the data being transferred by each op, etc.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Memory View:\n\nThis memory view tool helps you understand the hardware resource consumption of the operators in your model. Understanding the time and memory consumption on the operator-level allows you to resolve performance bottlenecks and in turn, allow your model to execute faster. Given limited GPU memory size, optimizing the memory usage can: \n\n1. Allow bigger model which can potentially generalize better on end level tasks.\n2. Allow bigger batch size. Bigger batch sizes increase the training speed.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "The profiler records all the memory allocation during the profiler interval. Selecting the \u201cDevice\u201d will allow you to see each operator\u2019s memory usage on the GPU side or host side. You must enable ```profile_memory=True``` to generate the below memory data as shown [here](https://github.com/pytorch/kineto/blob/master/tb_plugin/examples/resnet50_profiler_api.py#L39). \n\n```\nWith torch.profiler.profile(\nProfiler_memory=True # this will take 1 \u2013 2 minutes to complete. \n)", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "**Important Definitions**:\n\n\u2022\t\u201cSize Increase\u201d displays the sum of all allocation bytes and minus all the memory release bytes.\n\n\u2022\t\u201cAllocation Size\u201d shows the sum of all allocation bytes without considering the memory release.\n\n\u2022\t\u201cSelf\u201d means the allocated memory is not from any child operators, instead by the operator itself.\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "GPU Metric on Timeline:\n\nThis feature will help you debug performance issues when one or more GPU are underutilized. Ideally, your program should have high GPU utilization (aiming for 100% GPU utilization), minimal CPU to GPU communication, and no overhead.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "**Overview**:\nThe overview page highlights the results of three important GPU usage metrics at different levels (i.e. GPU Utilization, Est. SM Efficiency, and Est. Achieved Occupancy). Essentially, each GPU has a bunch of SM each with a bunch of warps that can execute a bunch of threads concurrently. Warps execute a bunch because the amount depends on the GPU. But at a high level, this GPU Metric on Timeline tool allows you can see the whole stack, which is useful.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "If the GPU utilization result is low, this suggests a potential bottleneck is present in your model. Common reasons: \n\n\u2022Insufficient parallelism in kernels (i.e., low batch size) \n\n\u2022Small kernels called in a loop. This is to say the launch overheads are not amortized \n\n\u2022CPU or I/O bottlenecks lead to the GPU not receiving enough work to keep busy", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Looking of the overview page where the performance recommendation section is where you\u2019ll find potential suggestions on how to increase that GPU utilization. In this example, GPU utilization is low so the performance recommendation was to increase batch size. Increasing batch size 4 to 32, as per the performance recommendation, increased the GPU Utilization by 60.68%.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "GPU Utilization: the step interval time in the profiler when a GPU engine was executing a workload. The high the utilization %, the better. The drawback of using GPU utilization solely to diagnose performance bottlenecks is it is too high-level and coarse. It won\u2019t be able to tell you how many Streaming Multiprocessors are in use. Note that while this metric is useful for detecting periods of idleness, a high value does not indicate efficient use of the GPU, only that it is doing anything at all. For", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "instance, a kernel with a single thread running continuously will get a GPU Utilization of 100%", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Estimated Stream Multiprocessor Efficiency (Est. SM Efficiency) is a finer grained metric, it indicates what percentage of SMs are in use at any point in the trace This metric reports the percentage of time where there is at least one active warp on a SM and those that are stalled (NVIDIA", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "[doc](https://forums.developer.nvidia.com/t/nvprof-question-about-the-sm-efficiency-metric/72640#:~:text=My%20understanding%20from%20the%20profiler%20documentation%20is%20that,that%20%E2%80%9Cactive%20warps%E2%80%9D%20include%20warps%20that%20are%20stalled.)). Est. SM Efficiency also has it\u2019s limitation. For instance, a kernel with only one thread per block can\u2019t fully use each SM. SM Efficiency does not tell us how busy each SM is, only that they are doing anything at all, which can include stalling while", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "waiting on the result of a memory load. To keep an SM busy, it is necessary to have a sufficient number of ready warps that can be run whenever a stall occurs", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Estimated Achieved Occupancy (Est. Achieved Occupancy) is a layer deeper than Est. SM Efficiency and GPU Utilization for diagnosing performance issues. Estimated Achieved Occupancy indicates how many warps can be active at once per SMs. Having a sufficient number of active warps is usually key to achieving good throughput. Unlike GPU Utilization and SM Efficiency, it is not a goal to make this value as high as possible. As a rule of thumb, good throughput gains can be had by improving this metric to 15% and", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "above. But at some point you will hit diminishing returns. If the value is already at 30% for example, further gains will be uncertain. This metric reports the average values of all warp schedulers for the kernel execution period (NVIDIA [doc](https://docs.nvidia.com/gameworks/content/developertools/desktop/analysis/report/cudaexperiments/kernellevel/achievedoccupancy.htm)). The larger the Est. Achieve Occupancy value is the better.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
Overview details: Resnet50_batchsize4
\n
\n\n\n

\n
Overview details: Resnet50_batchsize32
\n
\n\n_Kernel View_\nThe kernel has \u201cBlocks per SM\u201d and \u201cEst. Achieved Occupancy\u201d which is a great tool to compare model runs.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
\n\nMean Blocks per SM: \nBlocks per SM = Blocks of this kernel / SM number of this GPU. If this number is less than 1, it indicates the GPU multiprocessors are not fully utilized. \u201cMean Blocks per SM\u201d is weighted average of all runs of this kernel name, using each run\u2019s duration as weight.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Mean Est. Achieved Occupancy: \nEst. Achieved Occupancy is defined as above in overview. \u201cMean Est. Achieved Occupancy\u201d is weighted average of all runs of this kernel name, using each run\u2019s duration as weight.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "_Trace View_\nThis trace view displays a timeline that shows the duration of operators in your model and which system executed the operation. This view can help you identify whether the high consumption and long execution is because of input or model training. Currently, this trace view shows GPU Utilization and Est. SM Efficiency on a timeline. \n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "GPU utilization is calculated independently and divided into multiple 10 millisecond buckets. The buckets\u2019 GPU utilization values are drawn alongside the timeline between 0 \u2013 100%. In the above example, the \u201cProfilerStep5\u201d GPU utilization during thread 28022\u2019s busy time is higher than the following the one during \u201cOptimizer.step\u201d. This is where you can zoom-in to investigate why that is. \n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "From above, we can see the former\u2019s kernels are longer than the later\u2019s kernels. The later\u2019s kernels are too short in execution, which results in lower GPU utilization. \n\nEst. SM Efficiency: Each kernel has a calculated est. SM efficiency between 0 \u2013 100%. For example, the below kernel has only 64 blocks, while the SMs in this GPU is 80. Then its \u201cEst. SM Efficiency\u201d is 64/80, which is 0.8.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Cloud Storage Support\n\nAfter running pip install tensorboard, to have data be read through these cloud providers, you can now run: \n\n``` sh \ntorch-tb-profiler[blob] \ntorch-tb-profiler[gs] \ntorch-tb-profiler[s3] \n``` \n```pip install torch-tb-profiler[blob]```, ```pip install torch-tb-profiler[gs]```, or ```pip install torch-tb-profiler[S3]``` to have data be read through these cloud providers. For more information, please refer to this [README](https://github.com/pytorch/kineto/tree/main/tb_plugin).", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Jump to Source Code:\n\nOne of the great benefits of having both TensorBoard and the PyTorch Profiler being integrated directly in Visual Studio Code (VS Code) is the ability to directly jump to the source code (file and line) from the profiler stack traces. VS Code Python Extension now [supports TensorBoard Integration](https://devblogs.microsoft.com/python/python-in-visual-studio-code-february-2021-release/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Jump to source is ONLY available when Tensorboard is launched within VS Code. Stack tracing will appear on the plugin UI if the profiling with_stack=True. When you click on a stack trace from the PyTorch Profiler, VS Code will automatically open the corresponding file side by side and jump directly to the line of code of interest for you to debug. This allows you to quickly make actionable optimizations and changes to your code based on the profiling results and suggestions.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
Gify: Jump to Source using Visual Studio Code Plug In UI
\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "For how to optimize batch size performance, check out the step-by-step tutorial [here](https://opendatascience.com/optimizing-pytorch-performance-batch-size-with-pytorch-profiler/). PyTorch Profiler is also integrated with [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/advanced/profiler.html#pytorch-profiling) and you can simply launch your lightning training jobs with --```trainer.profiler=pytorch``` flag to generate the traces. Check out an example", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "[here](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/basic_examples/profiler_example.py).", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "What\u2019s Next for the PyTorch Profiler?\nYou just saw how PyTorch Profiler can help optimize a model. You can now try the Profiler by ```pip install torch-tb-profiler``` to optimize your PyTorch model. \n\nLook out for an advanced version of this tutorial in the future. We are also thrilled to continue to bring state-of-the-art tool to PyTorch users to improve ML performance. We'd love to hear from you. Feel free to open an issue [here](https://github.com/pytorch/kineto/issues).", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "For new and exciting features coming up with PyTorch Profiler, follow @PyTorch on Twitter and check us out on pytorch.org.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "Acknowledgements\n\nThe author would like to thank the contributions of the following individuals to this piece. From the Facebook side: Geeta Chauhan, Gisle Dankel, Woo Kim, Sam Farahzad, and Mark Saroufim. On the Microsoft side: AI Framework engineers (Teng Gao, Mike Guo, and Yang Gu), Guoliang Hua, and Thuy Nguyen.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing the Winners of the 2020 Global PyTorch Summer Hackathon'\nauthor: Team PyTorch\n---\n\nMore than 2,500 participants in this year\u2019s Global PyTorch Summer Hackathon pushed the envelope to create unique new tools and applications for PyTorch developers and researchers.\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "***Notice**: None of the projects submitted to the hackathon are associated with or offered by Facebook, Inc.* \n\nThis year\u2019s projects fell into three categories:\n\n* **PyTorch Developer Tools:** a tool or library for improving productivity and efficiency for PyTorch researchers and developers.\n\n* **Web/Mobile Applications Powered by PyTorch:** a web or mobile interface and/or an embedded device built using PyTorch.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "* **PyTorch Responsible AI Development Tools:** a tool, library, or web/mobile app to support researchers and developers in creating responsible AI that factors in fairness, security, privacy, and more throughout its entire development process.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "The virtual hackathon ran from June 22 to August 25, with more than 2,500 registered participants, representing 114 countries from Republic of Azerbaijan, to Zimbabwe, to Japan, submitting a total of 106 projects. Entrants were judged on their idea\u2019s quality, originality, potential impact, and how well they implemented it.\n\nMeet the winners of each category below.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Developer Tools\n\n**1st place** - [DeMask](https://devpost.com/software/asteroid-the-pytorch-based-source-separation-toolkit)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "DeMask is an end-to-end model for enhancing speech while wearing face masks \u2014 offering a clear benefit during times when face masks are mandatory in many spaces and for workers who wear face masks on the job. Built with [Asteroid](https://github.com/mpariente/asteroid), a PyTorch-based audio source separation toolkit, DeMask is trained to recognize distortions in speech created by the muffling from face masks and to adjust the speech to make it sound clearer.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "This submission stood out in particular because it represents both a high-quality idea and an implementation that can be reproduced by other researchers.\n\nHere is an example on how to train a speech separation model in less than 20 lines:\n\n```python\nfrom torch import optim\nfrom pytorch_lightning import Trainer\n\nfrom asteroid import ConvTasNet\nfrom asteroid.losses import PITLossWrapper\nfrom asteroid.data import LibriMix\nfrom asteroid.engine import System", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "train_loader, val_loader = LibriMix.loaders_from_mini(task='sep_clean', batch_size=4)\nmodel = ConvTasNet(n_src=2)\noptimizer = optim.Adam(model.parameters(), lr=1e-3)\nloss = PITLossWrapper(\n lambda x, y: (x - y).pow(2).mean(-1), # MSE\n pit_from=\"pw_pt\", # Point in the pairwise matrix.\n)\n\nsystem = System(model, optimizer, loss, train_loader, val_loader)\n\ntrainer = Trainer(fast_dev_run=True)\ntrainer.fit(system)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "**2nd place** - [carefree-learn](https://devpost.com/software/carefree-learn)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "A PyTorch-based automated machine learning (AutoML) solution, carefree-learn provides high-level APIs to make training models using tabular data sets simpler. It features an interface similar to [scikit-learn](https://scikit-learn.org/stable/) and functions as an end-to-end end pipeline for tabular data sets. It automatically detects feature column types and redundant feature columns, imputes missing values, encodes string columns and categorical columns, and preprocesses numerical columns, among other", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "features.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "**3rd Place** - [TorchExpo](https://devpost.com/software/torchexpo)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "TorchExpo is a collection of models and extensions that simplifies taking PyTorch from research to production in mobile devices. This library is more than a web and mobile application, and also comes with a Python library. The Python library is available via pip install and it helps researchers convert a state-of-the-art model in TorchScript and ONNX format in just one line. Detailed docs are available [here](https://torchexpo.readthedocs.io/en/latest/).", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "Web/Mobile Applications Powered by PyTorch\n\n**1st place** - [Q&Aid](https://devpost.com/software/pytorchxai)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "Q&Aid is a conceptual health-care chatbot aimed at making health-care diagnoses and facilitating communication between patients and doctors. It relies on a series of machine learning models to filter, label, and answer medical questions, based on a medical image and/or questions in text provided by a patient. The transcripts from the chat app then can be forwarded to the local hospitals and the patient will be contacted by one of them to make an appointment to determine proper diagnosis and care. The team", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "hopes that this concept application helps hospitals to work with patients more efficiently and provide proper care.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
\n\n**2nd place** - [Rasoee](https://devpost.com/software/groundwav)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "Rasoee is an application that can take images as input and output the name of the dish. It also lists the ingredients and recipe, along with the link to the original recipe online. Additionally, users can choose a cuisine from the list of cuisines in the drop menu, and describe the taste and/or method of preparation in text. Then the application will return matching dishes from the [list of 308 identifiable dishes](https://github.com/arijitgupta42/Rasoee/blob/master/Dishes.txt). The team has put a", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "significant amount of effort gathering and cleaning various datasets to build more accurate and comprehensive models. You can check out the application [here](https://rasoee.herokuapp.com).", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "**3rd place** - [Rexana the Robot \u2014 PyTorch](https://devpost.com/software/rexana-the-robot)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "Rexana is an AI voice assistant meant to lay the foundation for a physical robot that can complete basic tasks around the house. The system is capable of autonomous navigation (knowing its position around the house relative to landmarks), recognizing voice commands, and object detection and recognition \u2014 meaning it can be commanded to perform various household tasks (e.g., \"Rexana, water the potted plant in the lounge room.\u201d). Rexana can be controlled remotely via a mobile device, and the robot itself", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "features customizable hands (magnets, grippers, etc.) for taking on different jobs.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Responsible AI Development Tools\n\n**1st place**: [FairTorch](https://devpost.com/software/a-qeysp1)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "FairTorch is a fairness library for PyTorch. It lets developers add constraints to their models to equalize metrics across subgroups by simply adding a few lines of code. Model builders can choose a metric definition of fairness for their context, and enforce it at time of training. The library offers a suite of metrics that measure an AI system\u2019s performance among subgroups, and can apply to high-stakes examples where decision-making algorithms are deployed, such as hiring, school admissions, and banking.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "\n\n**2nd place**: [Fluence](https://devpost.com/software/fluence-5g2s9m)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
+{"page_content": "In order to facilitate direct comparisons between results published in the literature and further reduce the boilerplate code needed to run experiments with datasets in TorchGeo, we have created PyTorch Lightning [*datamodules*](https://torchgeo.readthedocs.io/en/latest/api/datamodules.html) with well-defined train-val-test splits and [*trainers*](https://torchgeo.readthedocs.io/en/latest/api/trainers.html) for various tasks like classification, regression, and semantic segmentation. These datamodules show how to incorporate augmentations from the kornia library, include preprocessing transforms (with pre-calculated channel statistics), and let users easily experiment with hyperparameters related to the data itself (as opposed to the modeling process). Training a semantic segmentation model on the Inria Aerial Image Labeling dataset is as easy as a few imports and four lines of code.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nfrom pytorch_lightning import Trainer\nfrom torchgeo.datamodules import InriaAerialImageLabelingDataModule\nfrom torchgeo.trainers import SemanticSegmentationTask\n\ndatamodule = InriaAerialImageLabelingDataModule(root_dir=\"...\", batch_size=64, num_workers=6)\ntask = SemanticSegmentationTask(segmentation_model=\"unet\", encoder_weights=\"imagenet\", learning_rate=0.1)\ntrainer = Trainer(gpus=1, default_root_dir=\"...\")\n\ntrainer.fit(model=task, datamodule=datamodule)\n```\n\n\n
\n
\n\n\nBuilding segmentations produced by a U-Net model trained on the Inria Aerial Image Labeling dataset. Reproducing these results is as simple as a few imports and four lines of code, making comparison of different models and training techniques simple and easy.\n
", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "In our [preprint](https://arxiv.org/abs/2111.08872) we show a set of results that use the aforementioned datamodules and trainers to benchmark simple modeling approaches for several of the datasets in TorchGeo. For example, we find that a simple ResNet-50 can achieve state-of-the-art performance on the [So2Sat](https://ieeexplore.ieee.org/document/9014553) dataset. These types of baseline results are important for evaluating the contribution of different modeling choices when tackling problems with remotely sensed data.\n\n# Future work and contributing\n\nThere is still a lot of remaining work to be done in order to make TorchGeo as easy to use as possible, especially for users without prior deep learning experience. One of the ways in which we plan to achieve this is by expanding our tutorials to include subjects like \"writing a custom dataset\" and \"transfer learning\", or tasks like \"land cover mapping\" and \"object detection\".", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "Another important project we are working on is pre-training models. Most remote sensing researchers work with very small labeled datasets, and could benefit from pre-trained models and transfer learning approaches. TorchGeo is the first deep learning library to provide models pre-trained on multispectral imagery. Our goal is to provide models for different image modalities (optical, SAR, multispectral) and specific platforms (Landsat, Sentinel, MODIS) as well as benchmark results showing their performance with different amounts of training data. Self-supervised learning is a promising method for training such models. Satellite imagery datasets often contain petabytes of imagery, but accurately labeled datasets are much harder to come by. Self-supervised learning methods will allow us to train directly on the raw imagery without needing large labeled datasets.", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "Aside from these larger projects, we're always looking to add new datasets, data augmentation transforms, and sampling strategies. If you're Python savvy and interested in contributing to TorchGeo, we would love to see contributions! TorchGeo is open source under an MIT license, so you can use it in almost any project.\n\nExternal links:\n\n- **Homepage**: [https://github.com/microsoft/torchgeo](https://github.com/microsoft/torchgeo)\n- **Documentation**: [https://torchgeo.readthedocs.io/](https://torchgeo.readthedocs.io/)\n- **PyPI**: [https://pypi.org/project/torchgeo/](https://pypi.org/project/torchgeo/)\n- **Paper**: [https://arxiv.org/abs/2111.08872](https://arxiv.org/abs/2111.08872)\n\nIf you like TorchGeo, give us a star on GitHub! And if you use TorchGeo in your work, please cite our paper.\n\n# Acknowledgments", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "# Acknowledgments\n\n*We would like to thank all TorchGeo contributors for their efforts in creating the library, the Microsoft AI for Good program for support, and the PyTorch Team for their guidance. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993), the State of Illinois, and as of December, 2019, the National Geospatial-Intelligence Agency. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. The research was supported in part by NSF grants IIS-1908104, OAC-1934634, and DBI-2021898.*", "metadata": {"source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'What\u2019s New in PyTorch Profiler 1.9?'\nauthor: Sabrina Smai, Program Manager on the AI Framework team at Microsoft\n---\n\nPyTorch Profiler v1.9 has been released! The goal of this new release (previous [PyTorch Profiler release](https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/)) is to provide you with new state-of-the-art tools to help diagnose and fix machine learning performance issues regardless of whether you are working on one or numerous machines. The objective is to target the execution steps that are the most costly in time and/or memory, and visualize the work load distribution between GPUs and CPUs. \n\nHere is a summary of the five major features being released:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "1.\t**Distributed Training View**: This helps you understand how much time and memory is consumed in your distributed training job. Many issues occur when you take a training model and split the load into worker nodes to be run in parallel as it can be a black box. The overall model goal is to speed up model training. This distributed training view will help you diagnose and debug issues within individual nodes. \n2.\t**Memory View**: This view allows you to understand your memory usage better. This tool will help you avoid the famously pesky Out of Memory error by showing active memory allocations at various points of your program run. \n3.\t**GPU Utilization Visualization**: This tool helps you make sure that your GPU is being fully utilized. \n4.\t**Cloud Storage Support**: Tensorboard plugin can now read profiling data from Azure Blob Storage, Amazon S3, and Google Cloud Platform. \n5.\t**Jump to Source Code**: This feature allows you to visualize stack tracing information and jump directly into the source code. This helps you quickly optimize and iterate on your code based on your profiling results.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "## Getting Started with PyTorch Profiling Tool\nPyTorch includes a profiling functionality called \u00ab PyTorch Profiler \u00bb. The PyTorch Profiler tutorial can be found [here](https://pytorch.org/tutorials/intermediate/tensorboard_profiler_tutorial.html).\n\nTo instrument your PyTorch code for profiling, you must:\n\n$ pip install torch-tb-profiler\n\n```python\nimport torch.profiler as profiler\nWith profiler.profile(XXXX)\n```\n\n**Comments**:\n\n\u2022 For CUDA and CPU profiling, see [below](https://github.com/pytorch/kineto/blob/master/tb_plugin/examples/resnet50_profiler_api.py): \n```\nwith torch.profiler.profile( \nactivities=[ \ntorch.profiler.ProfilerActivity.CPU, \ntorch.profiler.ProfilerActivity.CUDA], \n```\n\n\u2022\tWith profiler.record_function(\u201c$NAME\u201d): allows putting a decorator (a tag associated to a name) for a block of function\n\n\u2022\tProfile_memory=True parameter under profiler.profile allows you to profile CPU and GPU memory footprint\n\n## Visualizing PyTorch Model Performance using PyTorch Profiler\n\n### Distributed Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "Recent advances in deep learning argue for the value of large datasets and large models, which requires you to scale out model training to more computational resources. Distributed Data Parallel (DDP) and NVIDIA Collective Communications Library (NCCL) are the widely adopted paradigms in PyTorch for accelerating your deep learning training. \n\nIn this release of PyTorch Profiler, DDP with NCCL backend is now supported.\n\n\n

\n
\n\n### Computation/Communication Overview\n\nIn the Computation/Communication overview under the Distributed training view, you can observe the computation-to-communication ratio of each worker and [load balancer](https://en.wikipedia.org/wiki/Load_balancing_(computing) nodes between worker as measured by granularity. \n\n**Scenario 1**:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "**Scenario 1**:\n\nIf the computation and overlapping time of one worker is much larger than the others, this may suggest an issue in the workload balance or worker being a straggler. Computation is the sum of kernel time on GPU minus the overlapping time. The overlapping time is the time saved by interleaving communications during computation. The more overlapping time represents better parallelism between computation and communication. Ideally the computation and communication completely overlap with each other. Communication is the total communication time minus the overlapping time. The example image below displays how this scenario appears on Tensorboard. \n\n\n

\n
Figure: A straggler example
\n
\n\n**Scenario 2**:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "**Scenario 2**:\n\nIf there is a small batch size (i.e. less computation on each worker) or the data to be transferred is large, the computation-to-communication may also be small and be seen in the profiler with low GPU utilization and long waiting times. This computation/communication view will allow you to diagnose your code to reduce communication by adopting gradient accumulation, or to decrease the communication proportion by increasing batch size. DDP communication time depends on model size. Batch size has no relationship with model size. So increasing batch size could make computation time longer and make computation-to-communication ratio bigger. \n\n### Synchronizing/Communication Overview", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "In the Synchronizing/Communication view, you can observe the efficiency of communication. This is done by taking the step time minus computation and communication time. Synchronizing time is part of the total communication time for waiting and synchronizing with other workers. The Synchronizing/Communication view includes initialization, data loader, CPU computation, and so on Insights like what is the ratio of total communication is really used for exchanging data and what is the idle time of waiting for data from other workers can be drawn from this view. \n\n\n

\n
\n\nFor example, if there is an inefficient workload balance or straggler issue, you\u2019ll be able to identify it in this Synchronizing/Communication view. This view will show several workers\u2019 waiting time being longer than others.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
\n\nThis table view above allows you to see the detailed statistics of all communication ops in each node. This allows you to see what operation types are being called, how many times each op is called, what is the size of the data being transferred by each op, etc. \n\n### Memory View:\n\nThis memory view tool helps you understand the hardware resource consumption of the operators in your model. Understanding the time and memory consumption on the operator-level allows you to resolve performance bottlenecks and in turn, allow your model to execute faster. Given limited GPU memory size, optimizing the memory usage can: \n\n1. Allow bigger model which can potentially generalize better on end level tasks.\n2. Allow bigger batch size. Bigger batch sizes increase the training speed.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "The profiler records all the memory allocation during the profiler interval. Selecting the \u201cDevice\u201d will allow you to see each operator\u2019s memory usage on the GPU side or host side. You must enable ```profile_memory=True``` to generate the below memory data as shown [here](https://github.com/pytorch/kineto/blob/master/tb_plugin/examples/resnet50_profiler_api.py#L39). \n\n```\nWith torch.profiler.profile(\nProfiler_memory=True # this will take 1 \u2013 2 minutes to complete. \n)\n```\n\n**Important Definitions**:\n\n\u2022\t\u201cSize Increase\u201d displays the sum of all allocation bytes and minus all the memory release bytes.\n\n\u2022\t\u201cAllocation Size\u201d shows the sum of all allocation bytes without considering the memory release.\n\n\u2022\t\u201cSelf\u201d means the allocated memory is not from any child operators, instead by the operator itself.\n\n\n

\n
\n\n\n### GPU Metric on Timeline:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "This feature will help you debug performance issues when one or more GPU are underutilized. Ideally, your program should have high GPU utilization (aiming for 100% GPU utilization), minimal CPU to GPU communication, and no overhead. \n\n**Overview**:\nThe overview page highlights the results of three important GPU usage metrics at different levels (i.e. GPU Utilization, Est. SM Efficiency, and Est. Achieved Occupancy). Essentially, each GPU has a bunch of SM each with a bunch of warps that can execute a bunch of threads concurrently. Warps execute a bunch because the amount depends on the GPU. But at a high level, this GPU Metric on Timeline tool allows you can see the whole stack, which is useful. \n\nIf the GPU utilization result is low, this suggests a potential bottleneck is present in your model. Common reasons: \n\n\u2022Insufficient parallelism in kernels (i.e., low batch size) \n\n\u2022Small kernels called in a loop. This is to say the launch overheads are not amortized", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "\u2022CPU or I/O bottlenecks lead to the GPU not receiving enough work to keep busy \n\nLooking of the overview page where the performance recommendation section is where you\u2019ll find potential suggestions on how to increase that GPU utilization. In this example, GPU utilization is low so the performance recommendation was to increase batch size. Increasing batch size 4 to 32, as per the performance recommendation, increased the GPU Utilization by 60.68%.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "GPU Utilization: the step interval time in the profiler when a GPU engine was executing a workload. The high the utilization %, the better. The drawback of using GPU utilization solely to diagnose performance bottlenecks is it is too high-level and coarse. It won\u2019t be able to tell you how many Streaming Multiprocessors are in use. Note that while this metric is useful for detecting periods of idleness, a high value does not indicate efficient use of the GPU, only that it is doing anything at all. For instance, a kernel with a single thread running continuously will get a GPU Utilization of 100%", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "Estimated Stream Multiprocessor Efficiency (Est. SM Efficiency) is a finer grained metric, it indicates what percentage of SMs are in use at any point in the trace This metric reports the percentage of time where there is at least one active warp on a SM and those that are stalled (NVIDIA [doc](https://forums.developer.nvidia.com/t/nvprof-question-about-the-sm-efficiency-metric/72640#:~:text=My%20understanding%20from%20the%20profiler%20documentation%20is%20that,that%20%E2%80%9Cactive%20warps%E2%80%9D%20include%20warps%20that%20are%20stalled.)). Est. SM Efficiency also has it\u2019s limitation. For instance, a kernel with only one thread per block can\u2019t fully use each SM. SM Efficiency does not tell us how busy each SM is, only that they are doing anything at all, which can include stalling while waiting on the result of a memory load. To keep an SM busy, it is necessary to have a sufficient number of ready warps that can be run whenever a stall occurs", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "Estimated Achieved Occupancy (Est. Achieved Occupancy) is a layer deeper than Est. SM Efficiency and GPU Utilization for diagnosing performance issues. Estimated Achieved Occupancy indicates how many warps can be active at once per SMs. Having a sufficient number of active warps is usually key to achieving good throughput. Unlike GPU Utilization and SM Efficiency, it is not a goal to make this value as high as possible. As a rule of thumb, good throughput gains can be had by improving this metric to 15% and above. But at some point you will hit diminishing returns. If the value is already at 30% for example, further gains will be uncertain. This metric reports the average values of all warp schedulers for the kernel execution period (NVIDIA [doc](https://docs.nvidia.com/gameworks/content/developertools/desktop/analysis/report/cudaexperiments/kernellevel/achievedoccupancy.htm)). The larger the Est. Achieve Occupancy value is the better.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
Overview details: Resnet50_batchsize4
\n
\n\n\n

\n
Overview details: Resnet50_batchsize32
\n
\n\n_Kernel View_\nThe kernel has \u201cBlocks per SM\u201d and \u201cEst. Achieved Occupancy\u201d which is a great tool to compare model runs. \n\n\n

\n
\n\nMean Blocks per SM: \nBlocks per SM = Blocks of this kernel / SM number of this GPU. If this number is less than 1, it indicates the GPU multiprocessors are not fully utilized. \u201cMean Blocks per SM\u201d is weighted average of all runs of this kernel name, using each run\u2019s duration as weight.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
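+{"page_content": "As a worked example (not from the original post), the duration-weighted mean used for \u201cMean Blocks per SM\u201d, and likewise for \u201cMean Est. Achieved Occupancy\u201d below, can be sketched as follows; the run values are made up for illustration.\n\n```python\ndef duration_weighted_mean(runs):\n    # runs: list of (value, duration) pairs for all runs of one kernel name\n    total_duration = sum(duration for _, duration in runs)\n    return sum(value * duration for value, duration in runs) / total_duration\n\n# Hypothetical data: (blocks per SM, kernel duration in microseconds)\nruns = [(0.8, 1200.0), (0.5, 300.0)]\nprint(duration_weighted_mean(runs))  # 0.74 -- the long run dominates the average\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}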
+{"page_content": "Mean Est. Achieved Occupancy: \nEst. Achieved Occupancy is defined as above in overview. \u201cMean Est. Achieved Occupancy\u201d is weighted average of all runs of this kernel name, using each run\u2019s duration as weight. \n\n_Trace View_\nThis trace view displays a timeline that shows the duration of operators in your model and which system executed the operation. This view can help you identify whether the high consumption and long execution is because of input or model training. Currently, this trace view shows GPU Utilization and Est. SM Efficiency on a timeline. \n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "GPU utilization is calculated independently and divided into multiple 10 millisecond buckets. The buckets\u2019 GPU utilization values are drawn alongside the timeline between 0 \u2013 100%. In the above example, the \u201cProfilerStep5\u201d GPU utilization during thread 28022\u2019s busy time is higher than the following the one during \u201cOptimizer.step\u201d. This is where you can zoom-in to investigate why that is. \n\n\n

\n
\n\nFrom above, we can see the former\u2019s kernels are longer than the later\u2019s kernels. The later\u2019s kernels are too short in execution, which results in lower GPU utilization. \n\nEst. SM Efficiency: Each kernel has a calculated est. SM efficiency between 0 \u2013 100%. For example, the below kernel has only 64 blocks, while the SMs in this GPU is 80. Then its \u201cEst. SM Efficiency\u201d is 64/80, which is 0.8.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
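+{"page_content": "To make the two timeline metrics concrete, here is a rough sketch (not the plugin\u2019s actual implementation) of per-kernel Est. SM Efficiency and 10 ms bucketed GPU utilization; the interval data is hypothetical and concurrent kernels are not handled.\n\n```python\ndef est_sm_efficiency(num_blocks, num_sms):\n    # A kernel cannot use more SMs than the GPU has, so cap at 1.0.\n    return min(num_blocks / num_sms, 1.0)\n\nprint(est_sm_efficiency(64, 80))  # 0.8, matching the example above\n\ndef bucketed_gpu_utilization(kernel_intervals_us, total_us, bucket_us=10_000):\n    # kernel_intervals_us: (start, end) times of GPU kernels in microseconds,\n    # assumed non-overlapping. Returns one utilization value per 10 ms bucket.\n    utilization = []\n    for bucket_start in range(0, total_us, bucket_us):\n        bucket_end = bucket_start + bucket_us\n        busy = sum(\n            max(0, min(end, bucket_end) - max(start, bucket_start))\n            for start, end in kernel_intervals_us\n        )\n        utilization.append(min(busy / bucket_us, 1.0))\n    return utilization\n\nprint(bucketed_gpu_utilization([(0, 4_000), (12_000, 20_000)], total_us=20_000))\n# [0.4, 0.8]\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}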
+{"page_content": "\n

\n
\n\n### Cloud Storage Support\n\nAfter running pip install tensorboard, to have data be read through these cloud providers, you can now run: \n\n``` sh \ntorch-tb-profiler[blob] \ntorch-tb-profiler[gs] \ntorch-tb-profiler[s3] \n``` \n```pip install torch-tb-profiler[blob]```, ```pip install torch-tb-profiler[gs]```, or ```pip install torch-tb-profiler[S3]``` to have data be read through these cloud providers. For more information, please refer to this [README](https://github.com/pytorch/kineto/tree/main/tb_plugin). \n\n### Jump to Source Code:", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "One of the great benefits of having both TensorBoard and the PyTorch Profiler being integrated directly in Visual Studio Code (VS Code) is the ability to directly jump to the source code (file and line) from the profiler stack traces. VS Code Python Extension now [supports TensorBoard Integration](https://devblogs.microsoft.com/python/python-in-visual-studio-code-february-2021-release/).\n\nJump to source is ONLY available when Tensorboard is launched within VS Code. Stack tracing will appear on the plugin UI if the profiling with_stack=True. When you click on a stack trace from the PyTorch Profiler, VS Code will automatically open the corresponding file side by side and jump directly to the line of code of interest for you to debug. This allows you to quickly make actionable optimizations and changes to your code based on the profiling results and suggestions.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
Gify: Jump to Source using Visual Studio Code Plug In UI
\n
\n\nFor how to optimize batch size performance, check out the step-by-step tutorial [here](https://opendatascience.com/optimizing-pytorch-performance-batch-size-with-pytorch-profiler/). PyTorch Profiler is also integrated with [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/stable/advanced/profiler.html#pytorch-profiling) and you can simply launch your lightning training jobs with --```trainer.profiler=pytorch``` flag to generate the traces. Check out an example [here](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/basic_examples/profiler_example.py). \n\n## What\u2019s Next for the PyTorch Profiler?\nYou just saw how PyTorch Profiler can help optimize a model. You can now try the Profiler by ```pip install torch-tb-profiler``` to optimize your PyTorch model.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
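+{"page_content": "As a small illustrative sketch (assuming PyTorch Lightning is installed), the profiler can also be enabled directly in code instead of via the `--trainer.profiler=pytorch` CLI flag mentioned above; `MyLightningModule` and `train_loader` below are placeholders for your own module and data.\n\n```python\nimport pytorch_lightning as pl\n\n# profiler='pytorch' wraps the PyTorch Profiler described in this post.\ntrainer = pl.Trainer(max_epochs=1, profiler='pytorch')\n# trainer.fit(MyLightningModule(), train_loader)  # placeholders for your own code\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}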
+{"page_content": "Look out for an advanced version of this tutorial in the future. We are also thrilled to continue to bring state-of-the-art tool to PyTorch users to improve ML performance. We'd love to hear from you. Feel free to open an issue [here](https://github.com/pytorch/kineto/issues). \n\nFor new and exciting features coming up with PyTorch Profiler, follow @PyTorch on Twitter and check us out on pytorch.org. \n\n## Acknowledgements\n\nThe author would like to thank the contributions of the following individuals to this piece. From the Facebook side: Geeta Chauhan, Gisle Dankel, Woo Kim, Sam Farahzad, and Mark Saroufim. On the Microsoft side: AI Framework engineers (Teng Gao, Mike Guo, and Yang Gu), Guoliang Hua, and Thuy Nguyen.", "metadata": {"source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing the Winners of the 2020 Global PyTorch Summer Hackathon'\nauthor: Team PyTorch\n---\n\nMore than 2,500 participants in this year\u2019s Global PyTorch Summer Hackathon pushed the envelope to create unique new tools and applications for PyTorch developers and researchers.\n\n\n

\n
\n\n***Notice**: None of the projects submitted to the hackathon are associated with or offered by Facebook, Inc.* \n\nThis year\u2019s projects fell into three categories:\n\n* **PyTorch Developer Tools:** a tool or library for improving productivity and efficiency for PyTorch researchers and developers.\n\n* **Web/Mobile Applications Powered by PyTorch:** a web or mobile interface and/or an embedded device built using PyTorch.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
+{"page_content": "* **PyTorch Responsible AI Development Tools:** a tool, library, or web/mobile app to support researchers and developers in creating responsible AI that factors in fairness, security, privacy, and more throughout its entire development process.\n\nThe virtual hackathon ran from June 22 to August 25, with more than 2,500 registered participants, representing 114 countries from Republic of Azerbaijan, to Zimbabwe, to Japan, submitting a total of 106 projects. Entrants were judged on their idea\u2019s quality, originality, potential impact, and how well they implemented it.\n\nMeet the winners of each category below. \n\n## PyTorch Developer Tools\n\n**1st place** - [DeMask](https://devpost.com/software/asteroid-the-pytorch-based-source-separation-toolkit)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
+{"page_content": "DeMask is an end-to-end model for enhancing speech while wearing face masks \u2014 offering a clear benefit during times when face masks are mandatory in many spaces and for workers who wear face masks on the job. Built with [Asteroid](https://github.com/mpariente/asteroid), a PyTorch-based audio source separation toolkit, DeMask is trained to recognize distortions in speech created by the muffling from face masks and to adjust the speech to make it sound clearer. \n\nThis submission stood out in particular because it represents both a high-quality idea and an implementation that can be reproduced by other researchers.\n\nHere is an example on how to train a speech separation model in less than 20 lines:\n\n```python\nfrom torch import optim\nfrom pytorch_lightning import Trainer\n\nfrom asteroid import ConvTasNet\nfrom asteroid.losses import PITLossWrapper\nfrom asteroid.data import LibriMix\nfrom asteroid.engine import System", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
+{"page_content": "train_loader, val_loader = LibriMix.loaders_from_mini(task='sep_clean', batch_size=4)\nmodel = ConvTasNet(n_src=2)\noptimizer = optim.Adam(model.parameters(), lr=1e-3)\nloss = PITLossWrapper(\n lambda x, y: (x - y).pow(2).mean(-1), # MSE\n pit_from=\"pw_pt\", # Point in the pairwise matrix.\n)\n\nsystem = System(model, optimizer, loss, train_loader, val_loader)\n\ntrainer = Trainer(fast_dev_run=True)\ntrainer.fit(system)\n```\n\n**2nd place** - [carefree-learn](https://devpost.com/software/carefree-learn)\n\nA PyTorch-based automated machine learning (AutoML) solution, carefree-learn provides high-level APIs to make training models using tabular data sets simpler. It features an interface similar to [scikit-learn](https://scikit-learn.org/stable/) and functions as an end-to-end end pipeline for tabular data sets. It automatically detects feature column types and redundant feature columns, imputes missing values, encodes string columns and categorical columns, and preprocesses numerical columns, among other features.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
+{"page_content": "**3rd Place** - [TorchExpo](https://devpost.com/software/torchexpo)\n\nTorchExpo is a collection of models and extensions that simplifies taking PyTorch from research to production in mobile devices. This library is more than a web and mobile application, and also comes with a Python library. The Python library is available via pip install and it helps researchers convert a state-of-the-art model in TorchScript and ONNX format in just one line. Detailed docs are available [here](https://torchexpo.readthedocs.io/en/latest/).\n\n## Web/Mobile Applications Powered by PyTorch\n\n**1st place** - [Q&Aid](https://devpost.com/software/pytorchxai)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
+{"page_content": "Q&Aid is a conceptual health-care chatbot aimed at making health-care diagnoses and facilitating communication between patients and doctors. It relies on a series of machine learning models to filter, label, and answer medical questions, based on a medical image and/or questions in text provided by a patient. The transcripts from the chat app then can be forwarded to the local hospitals and the patient will be contacted by one of them to make an appointment to determine proper diagnosis and care. The team hopes that this concept application helps hospitals to work with patients more efficiently and provide proper care. \n\n\n

\n
\n\n**2nd place** - [Rasoee](https://devpost.com/software/groundwav)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
+{"page_content": "Rasoee is an application that can take images as input and output the name of the dish. It also lists the ingredients and recipe, along with the link to the original recipe online. Additionally, users can choose a cuisine from the list of cuisines in the drop menu, and describe the taste and/or method of preparation in text. Then the application will return matching dishes from the [list of 308 identifiable dishes](https://github.com/arijitgupta42/Rasoee/blob/master/Dishes.txt). The team has put a significant amount of effort gathering and cleaning various datasets to build more accurate and comprehensive models. You can check out the application [here](https://rasoee.herokuapp.com).\n\n**3rd place** - [Rexana the Robot \u2014 PyTorch](https://devpost.com/software/rexana-the-robot)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
+{"page_content": "Rexana is an AI voice assistant meant to lay the foundation for a physical robot that can complete basic tasks around the house. The system is capable of autonomous navigation (knowing its position around the house relative to landmarks), recognizing voice commands, and object detection and recognition \u2014 meaning it can be commanded to perform various household tasks (e.g., \"Rexana, water the potted plant in the lounge room.\u201d). Rexana can be controlled remotely via a mobile device, and the robot itself features customizable hands (magnets, grippers, etc.) for taking on different jobs.\n\n## PyTorch Responsible AI Development Tools\n\n**1st place**: [FairTorch](https://devpost.com/software/a-qeysp1)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
+{"page_content": "FairTorch is a fairness library for PyTorch. It lets developers add constraints to their models to equalize metrics across subgroups by simply adding a few lines of code. Model builders can choose a metric definition of fairness for their context, and enforce it at time of training. The library offers a suite of metrics that measure an AI system\u2019s performance among subgroups, and can apply to high-stakes examples where decision-making algorithms are deployed, such as hiring, school admissions, and banking.\n\n\n\n**2nd place**: [Fluence](https://devpost.com/software/fluence-5g2s9m)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
{"page_content": "Fluence is a PyTorch-based deep learning library for language research. It specifically addresses the large compute demands of natural language processing (NLP) research. Fluence aims to provide low-resource and computationally efficient algorithms for NLP, giving researchers algorithms that can enhance current NLP methods or help discover where current methods fall short.\n\n**3rd place**: [Causing: CAUSal INterpretation using Graphs](https://devpost.com/software/realrate-explainable-ai-for-company-ratings)", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "Causing (CAUSal INterpretation using Graphs) is a multivariate graphic analysis tool for bringing transparency to neural networks. It explains causality and helps researchers and developers interpret the causal effects of a given equation system to ensure fairness. Developers can input data and a model describing the dependencies between the variables within the data set into Causing, and Causing will output a colored graph of quantified effects acting between the model\u2019s variables. In addition, it also", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "allows developers to estimate these effects to validate whether data fits a model.", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "Thank you,\n\n**The PyTorch team**", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch\u2019s Tracing Based Selective Build\"\nauthor: Dhruv Matani, Suraj Subramanian\nfeatured-img: \"/assets/images/pytorchs-tracing-based-selective-build_Figure_4.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Introduction\n\n**TL;DR**: It can be challenging to run PyTorch on mobile devices, SBCs (Single Board Computers), and IOT devices. When compiled, the PyTorch library is huge and includes dependencies that might not be needed for the on-device use case.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "To run a specific set of models on-device, we actually require only a small subset of the features in the PyTorch library. We found that using a PyTorch runtime generated using **selective build** can achieve up to 90% reduction in binary size (for the CPU and QuantizedCPU backends on an x86-64 build on Linux). In this blog, we share our experience of generating model-specific minimal runtimes using Selective Build and show you how to do the same.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Why is this important for app developers?\n\nUsing a PyTorch runtime generated by **selective build** can reduce the size of AI-powered apps by 30+ MB - a significant reduction for a typical mobile app! Making mobile applications more lightweight has many benefits - they are runnable on a wider variety of devices, consume less cellular data, and can be downloaded and updated faster on user\u2019s devices.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "What does the Developer Experience look like?\n\nThis method can work seamlessly with any existing PyTorch Mobile deployment workflows. All you need to do is replace the general PyTorch runtime library with a runtime customized for the specific models you wish to use in your application. The general steps in this process are:", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "1. Build the PyTorch Runtime in **instrumentation mode** (this is called an **instrumentation build** of PyTorch). This will record the used operators, kernels and features.\n2. Run your models through this instrumentation build by using the provided **model_tracer** binary. This will generate a single YAML file that stores all the features used by your model. These features will be preserved in the minimal runtime.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "3. Build PyTorch using this YAML file as input. This is the **selective build** technique, and it greatly reduces the size of the final PyTorch binary.\n4. Use this selectively-built PyTorch library to reduce the size of your mobile application!", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Building the PyTorch Runtime in a special **\u201cinstrumentation\u201d mode** ( by passing the `TRACING_BASED=1` build option) generates an **instrumentation build** runtime of PyTorch, along with a **model_tracer** binary. Running a model with this build allows us to trace the parts of PyTorch used by the model.\n\n\n
\n
\n\n\n Figure 1: Instrumentation build of PyTorch\n
", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "```python\n# Clone the PyTorch repo\ngit clone https://github.com/pytorch/pytorch.git\ncd pytorch\n\n# Build the model_tracer\nUSE_NUMPY=0 USE_DISTRIBUTED=0 USE_CUDA=0 TRACING_BASED=1 \\\n python setup.py develop", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Now this instrumentation build is used to run a model inference with representative inputs. The **model_tracer** binary observes parts of the instrumentation build that were activated during the inference run, and dumps it to a YAML file.\n\n\n
\n
\n\n\n Figure 2: YAML file generated by running model(s) on an instrumentation build\n
", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "```python\n# Generate YAML file\n./build/bin/model_tracer \\\n --model_input_path /tmp/path_to_model.ptl \\\n --build_yaml_path /tmp/selected_ops.yaml", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Now we build the PyTorch Runtime again, but this time using the YAML file generated by the tracer. The runtime now only includes those parts that are needed for this model. This is called **\u201cSelectively built PyTorch runtime\u201d** in the diagram below.\n\n```python\n# Clean out cached configuration\nmake clean", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "# Build PyTorch using Selected Operators (from the YAML file)\n# using the host toolchain, and use this generated library\nBUILD_PYTORCH_MOBILE_WITH_HOST_TOOLCHAIN=1 \\\nUSE_LIGHTWEIGHT_DISPATCH=0 \\\nBUILD_LITE_INTERPRETER=1 \\\nSELECTED_OP_LIST=/tmp/selected_ops.yaml \\\nTRACING_BASED=1 \\\n ./scripts/build_mobile.sh", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n Figure 3: Selective Build of PyTorch and model execution on a selectively built PyTorch runtime\n
", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Show me the code!\n\nWe\u2019ve put together a [notebook](https://gist.github.com/dhruvbird/65fd800983f362a72d78afe68031568c) to illustrate what the process above looks like in code using a simple PyTorch model. \n\nFor a more hands-on tutorial to deploy this on Android/iOS [this tutorial](https://pytorch.org/tutorials/prototype/tracing_based_selective_build.html) should be helpful.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Technical FAQs", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Why is Tracing needed for a Selective Build of PyTorch?", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "In PyTorch, CPU kernels can call other operators via the [PyTorch Dispatcher](http://blog.ezyang.com/2020/09/lets-talk-about-the-pytorch-dispatcher/). Simply including the set of root operators called directly by the model is not sufficient as there might be many more being called under-the-hood transitively. Running the model on representative inputs and observing the actual list of operators called (aka \u201ctracing\u201d) is the most accurate way of determining what parts of PyTorch are used.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, factors such as which dtypes a kernel should handle are also runtime features that depend on actual input provided to the model. Hence, the tracing mechanism is extremely suitable for this purpose.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Which features can be selected (in or out) by using Tracing Based Selective Build?\n\nThe following features can be selected for the PyTorch runtime during the tracing based selective build process:", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "1. [CPU/QuantizedCPU](https://codebrowser.bddppq.com/pytorch/pytorch/build/aten/src/ATen/) kernels for [PyTorch\u2019s ATen Operators](https://pytorch.org/cppdocs/): If a PyTorch Operator is not needed by a model targeted at a selectively built runtime, then the registration of that CPU kernel is omitted in the runtime. This is controlled via [Torchgen code-gen](https://github.com/pytorch/pytorch/blob/master/torchgen/gen.py).", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "2. [Primary Operators](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/runtime/register_prim_ops.cpp): This is controlled by a macro named [TORCH_SELECTIVE_SCHEMA](https://codebrowser.bddppq.com/pytorch/pytorch/torch/library.h.html) (via templated selective build) that either selects a primary operator or de-selects it based on information in a generated header file.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "3. Code that handles [specific dtypes](https://codebrowser.bddppq.com/pytorch/pytorch/aten/src/ATen/Dispatch.h.html) in CPU kernels: This is performed by generating exception throws in specific case statements in the switch case generated by the macro [AT_PRIVATE_CHECK_SELECTIVE_BUILD](https://codebrowser.bddppq.com/pytorch/pytorch/aten/src/ATen/Dispatch.h.html#_M/AT_PRIVATE_CHECK_SELECTIVE_BUILD).", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "4. Registration of [Custom C++ Classes](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html) that extend PyTorch: This is controlled by the macro [TORCH_SELECTIVE_CLASS](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/fbgemm_utils.cpp#L385-L386), which can be used when registering Custom C++ Classes. The [torch::selective_class_<>](https://github.com/pytorch/pytorch/blob/master/torch/custom_class.h#L443-L460) helper is to be used in conjunction with the", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "macro [TORCH_SELECTIVE_CLASS](https://codebrowser.bddppq.com/pytorch/pytorch/torch/library.h.html#_M/TORCH_SELECTIVE_CLASS).", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "What is the structure of the YAML file used during the build?\n\nThe YAML file generated after tracing looks like the example below. It encodes all the elements of the \u201cselectable\u201d build feature as specified above.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "```python\ninclude_all_non_op_selectives: false\nbuild_features: []\noperators:\n aten::add.Tensor:\n is_used_for_training: false\n is_root_operator: true\n include_all_overloads: false\n aten::len.t:\n is_used_for_training: false\n is_root_operator: true\n include_all_overloads: false\nkernel_metadata:\n _local_scalar_dense_cpu:\n - Float\n add_stub:\n - Float\n copy_:\n - Bool\n - Byte\n mul_cpu:\n - Float\ncustom_classes: []\n```", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "How exactly is code eliminated from the generated binary?\n\nDepending on the specific scenario, there are 2 main techniques that are used to hint the compiler and linker about unused and unreachable code. This code is then cleaned up by the compiler or linker as unreachable code.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "[1] Unreferenced functions removed by the Linker\n\nWhen a function that isn\u2019t transitively referenced from any visible function is present in the compiled object files that are being linked together, the linker will remove it (if the right build flags are provided). This is leveraged in 2 scenarios by the selective build system.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Kernel Registration in the Dispatcher\n\nIf an operator\u2019s kernel isn\u2019t needed, then it isn\u2019t registered with the dispatcher. An unregistered kernel means that the function is unreachable, and it will be removed by the linker.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Templated Selective Build\n\nThe general idea here is that a class template specialization is used to select a class that either captures a reference to a function or not (depending on whether it\u2019s used) and the linker can come along and clean out the unreferenced function.\n\nFor example, in the code below, there\u2019s no reference to the function \u201c`fn2`\u201d, so it will be cleaned up by the linker since it\u2019s not referenced anywhere.\n\n```python\n#include \n#include ", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "template \nstruct FunctionSelector {\n T fn_;\n FunctionSelector(T fn): fn_(fn) {}\n T get() { return this->fn_; }\n};\n\n// The \"false\" specialization of this class does NOT retain the argument passed\n// to the class constructor, which means that the function pointer passed in\n// is considered to be unreferenced in the program (unless it is referenced\n// elsewhere).\ntemplate \nstruct FunctionSelector {\n FunctionSelector(T) {}\n};", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "template \nFunctionSelector make_function_selector_true(T fn) {\n return FunctionSelector(fn);\n}\n\ntemplate \nFunctionSelector make_function_selector_false(T fn) {\n return FunctionSelector(fn);\n}\n\ntypedef void(*fn_ptr_type)();\n\nstd::vector fns;\n\ntemplate \nvoid add_fn(FunctionSelector fs) {\n fns.push_back(fs.get());\n}\n\ntemplate \nvoid add_fn(FunctionSelector) {\n // Do nothing.\n}", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "// fn1 will be kept by the linker since it is added to the vector \"fns\" at\n// runtime.\nvoid fn1() {\n printf(\"fn1\\n\");\n}\n\n// fn2 will be removed by the linker since it isn't referenced at all.\nvoid fn2() {\n printf(\"fn2\\n\");\n}\n\nint main() {\n add_fn(make_function_selector_true(fn1));\n add_fn(make_function_selector_false(fn2));\n}\n```", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "[2] Dead Code Eliminated by the Compiler\n\nC++ Compilers can detect dead ([unreachable](https://en.wikipedia.org/wiki/Unreachable_code)) code by analyzing the code\u2019s control flow statically. For example, if there\u2019s a code-path that comes after an **unconditional exception throw**, then all the code after it will be marked as dead code and not converted to object code by the compiler. Typically, compilers require the use of the `-fdce` flag to eliminate dead code.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "In the example below, you can see that the C++ code on the left (in the red boxes) doesn\u2019t have any corresponding generated object code on the right.\n\n\n
\n
\n\n\n Figure 4: Dead Code Elimination by C++ Compilers\n
", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "This property is leveraged in the bodies of PyTorch kernel implementations that have a lot of repeated code to handle multiple dtypes of a Tensor. A [dtype](https://pytorch.org/docs/stable/tensor_attributes.html) is the underlying data-type that the Tensor stores elements of. This can be one of float, double, int64, bool, int8, etc\u2026", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Almost every PyTorch CPU kernel uses a macro of the form AT_DISPATCH_ALL_TYPES* that is used to substitute some code specialized for every dtype that the kernel needs to handle. For example:\n\n```python\nAT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3(\n kBool, kHalf, kBFloat16, dtype, \"copy_kernel\", [&] {\n cpu_kernel_vec(\n iter,\n [=](scalar_t a) -> scalar_t { return a; },\n [=](Vectorized a) -> Vectorized { return a; });\n});", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "The macro `AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3` internally has a switch-case statement that looks like the code in Figure-4 above. The tracing process records the dtypes triggered for the kernel tag \"`copy_kernel`\" and the build process processes these tags and inserts `throw` statements in every `case` statement that is handling the dtype that isn\u2019t required for this kernel tag.\n\nThis is how dtype selectivity is implemented in PyTorch\u2019s Tracing Based Selective Build.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "Conclusion\n\nTracing Based Selective Build is a practical and scalable approach to selecting only the used parts of an application to retain code that static analysis can not detect. This code is usually extremely data/input dependent in nature.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "Causing (CAUSal INterpretation using Graphs) is a multivariate graphic analysis tool for bringing transparency to neural networks. It explains causality and helps researchers and developers interpret the causal effects of a given equation system to ensure fairness. Developers can input data and a model describing the dependencies between the variables within the data set into Causing, and Causing will output a colored graph of quantified effects acting between the model\u2019s variables. In addition, it also allows developers to estimate these effects to validate whether data fits a model.\n\nThank you,\n\n**The PyTorch team**", "metadata": {"source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch\u2019s Tracing Based Selective Build\"\nauthor: Dhruv Matani, Suraj Subramanian\nfeatured-img: \"/assets/images/pytorchs-tracing-based-selective-build_Figure_4.png\"\n---\n\n## Introduction\n\n**TL;DR**: It can be challenging to run PyTorch on mobile devices, SBCs (Single Board Computers), and IOT devices. When compiled, the PyTorch library is huge and includes dependencies that might not be needed for the on-device use case. \n\nTo run a specific set of models on-device, we actually require only a small subset of the features in the PyTorch library. We found that using a PyTorch runtime generated using **selective build** can achieve up to 90% reduction in binary size (for the CPU and QuantizedCPU backends on an x86-64 build on Linux). In this blog, we share our experience of generating model-specific minimal runtimes using Selective Build and show you how to do the same.\n\n## Why is this important for app developers?", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "Using a PyTorch runtime generated by **selective build** can reduce the size of AI-powered apps by 30+ MB - a significant reduction for a typical mobile app! Making mobile applications more lightweight has many benefits - they are runnable on a wider variety of devices, consume less cellular data, and can be downloaded and updated faster on user\u2019s devices.\n\n## What does the Developer Experience look like?\n\nThis method can work seamlessly with any existing PyTorch Mobile deployment workflows. All you need to do is replace the general PyTorch runtime library with a runtime customized for the specific models you wish to use in your application. The general steps in this process are:", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "1. Build the PyTorch Runtime in **instrumentation mode** (this is called an **instrumentation build** of PyTorch). This will record the used operators, kernels and features.\n2. Run your models through this instrumentation build by using the provided **model_tracer** binary. This will generate a single YAML file that stores all the features used by your model. These features will be preserved in the minimal runtime.\n3. Build PyTorch using this YAML file as input. This is the **selective build** technique, and it greatly reduces the size of the final PyTorch binary.\n4. Use this selectively-built PyTorch library to reduce the size of your mobile application!\n\n\nBuilding the PyTorch Runtime in a special **\u201cinstrumentation\u201d mode** ( by passing the `TRACING_BASED=1` build option) generates an **instrumentation build** runtime of PyTorch, along with a **model_tracer** binary. Running a model with this build allows us to trace the parts of PyTorch used by the model.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\n\n Figure 1: Instrumentation build of PyTorch\n
\n\n```python\n# Clone the PyTorch repo\ngit clone https://github.com/pytorch/pytorch.git\ncd pytorch\n\n# Build the model_tracer\nUSE_NUMPY=0 USE_DISTRIBUTED=0 USE_CUDA=0 TRACING_BASED=1 \\\n python setup.py develop\n```\n\nNow this instrumentation build is used to run a model inference with representative inputs. The **model_tracer** binary observes parts of the instrumentation build that were activated during the inference run, and dumps it to a YAML file.\n\n\n
\n
\n\n\n Figure 2: YAML file generated by running model(s) on an instrumentation build\n
", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "```python\n# Generate YAML file\n./build/bin/model_tracer \\\n --model_input_path /tmp/path_to_model.ptl \\\n --build_yaml_path /tmp/selected_ops.yaml\n```\n\nNow we build the PyTorch Runtime again, but this time using the YAML file generated by the tracer. The runtime now only includes those parts that are needed for this model. This is called **\u201cSelectively built PyTorch runtime\u201d** in the diagram below.\n\n```python\n# Clean out cached configuration\nmake clean\n\n# Build PyTorch using Selected Operators (from the YAML file)\n# using the host toolchain, and use this generated library\nBUILD_PYTORCH_MOBILE_WITH_HOST_TOOLCHAIN=1 \\\nUSE_LIGHTWEIGHT_DISPATCH=0 \\\nBUILD_LITE_INTERPRETER=1 \\\nSELECTED_OP_LIST=/tmp/selected_ops.yaml \\\nTRACING_BASED=1 \\\n ./scripts/build_mobile.sh\n```\n\n\n
\n
\n\n\n Figure 3: Selective Build of PyTorch and model execution on a selectively built PyTorch runtime\n
", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "### Show me the code!\n\nWe\u2019ve put together a [notebook](https://gist.github.com/dhruvbird/65fd800983f362a72d78afe68031568c) to illustrate what the process above looks like in code using a simple PyTorch model. \n\nFor a more hands-on tutorial to deploy this on Android/iOS [this tutorial](https://pytorch.org/tutorials/prototype/tracing_based_selective_build.html) should be helpful.\n\n## Technical FAQs\n\n### Why is Tracing needed for a Selective Build of PyTorch?\n\nIn PyTorch, CPU kernels can call other operators via the [PyTorch Dispatcher](http://blog.ezyang.com/2020/09/lets-talk-about-the-pytorch-dispatcher/). Simply including the set of root operators called directly by the model is not sufficient as there might be many more being called under-the-hood transitively. Running the model on representative inputs and observing the actual list of operators called (aka \u201ctracing\u201d) is the most accurate way of determining what parts of PyTorch are used.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "Additionally, factors such as which dtypes a kernel should handle are also runtime features that depend on actual input provided to the model. Hence, the tracing mechanism is extremely suitable for this purpose.\n\n### Which features can be selected (in or out) by using Tracing Based Selective Build?\n\nThe following features can be selected for the PyTorch runtime during the tracing based selective build process:", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "1. [CPU/QuantizedCPU](https://codebrowser.bddppq.com/pytorch/pytorch/build/aten/src/ATen/) kernels for [PyTorch\u2019s ATen Operators](https://pytorch.org/cppdocs/): If a PyTorch Operator is not needed by a model targeted at a selectively built runtime, then the registration of that CPU kernel is omitted in the runtime. This is controlled via [Torchgen code-gen](https://github.com/pytorch/pytorch/blob/master/torchgen/gen.py).\n2. [Primary Operators](https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/runtime/register_prim_ops.cpp): This is controlled by a macro named [TORCH_SELECTIVE_SCHEMA](https://codebrowser.bddppq.com/pytorch/pytorch/torch/library.h.html) (via templated selective build) that either selects a primary operator or de-selects it based on information in a generated header file.\n3. Code that handles [specific dtypes](https://codebrowser.bddppq.com/pytorch/pytorch/aten/src/ATen/Dispatch.h.html) in CPU kernels: This is performed by generating exception throws in specific case statements in the switch case generated by the macro [AT_PRIVATE_CHECK_SELECTIVE_BUILD](https://codebrowser.bddppq.com/pytorch/pytorch/aten/src/ATen/Dispatch.h.html#_M/AT_PRIVATE_CHECK_SELECTIVE_BUILD).\n4. Registration of [Custom C++ Classes](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html) that extend PyTorch: This is controlled by the macro [TORCH_SELECTIVE_CLASS](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/fbgemm_utils.cpp#L385-L386), which can be used when registering Custom C++ Classes. The [torch::selective_class_<>](https://github.com/pytorch/pytorch/blob/master/torch/custom_class.h#L443-L460) helper is to be used in conjunction with the macro [TORCH_SELECTIVE_CLASS](https://codebrowser.bddppq.com/pytorch/pytorch/torch/library.h.html#_M/TORCH_SELECTIVE_CLASS).", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "### What is the structure of the YAML file used during the build?\n\nThe YAML file generated after tracing looks like the example below. It encodes all the elements of the \u201cselectable\u201d build feature as specified above.\n\n```python\ninclude_all_non_op_selectives: false\nbuild_features: []\noperators:\n aten::add.Tensor:\n is_used_for_training: false\n is_root_operator: true\n include_all_overloads: false\n aten::len.t:\n is_used_for_training: false\n is_root_operator: true\n include_all_overloads: false\nkernel_metadata:\n _local_scalar_dense_cpu:\n - Float\n add_stub:\n - Float\n copy_:\n - Bool\n - Byte\n mul_cpu:\n - Float\ncustom_classes: []\n```\n\n### How exactly is code eliminated from the generated binary?\n\nDepending on the specific scenario, there are 2 main techniques that are used to hint the compiler and linker about unused and unreachable code. This code is then cleaned up by the compiler or linker as unreachable code.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "#### [1] Unreferenced functions removed by the Linker\n\nWhen a function that isn\u2019t transitively referenced from any visible function is present in the compiled object files that are being linked together, the linker will remove it (if the right build flags are provided). This is leveraged in 2 scenarios by the selective build system.\n\n##### Kernel Registration in the Dispatcher\n\nIf an operator\u2019s kernel isn\u2019t needed, then it isn\u2019t registered with the dispatcher. An unregistered kernel means that the function is unreachable, and it will be removed by the linker.\n\n##### Templated Selective Build\n\nThe general idea here is that a class template specialization is used to select a class that either captures a reference to a function or not (depending on whether it\u2019s used) and the linker can come along and clean out the unreferenced function.\n\nFor example, in the code below, there\u2019s no reference to the function \u201c`fn2`\u201d, so it will be cleaned up by the linker since it\u2019s not referenced anywhere.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "```python\n#include \n#include \n\ntemplate \nstruct FunctionSelector {\n T fn_;\n FunctionSelector(T fn): fn_(fn) {}\n T get() { return this->fn_; }\n};\n\n// The \"false\" specialization of this class does NOT retain the argument passed\n// to the class constructor, which means that the function pointer passed in\n// is considered to be unreferenced in the program (unless it is referenced\n// elsewhere).\ntemplate \nstruct FunctionSelector {\n FunctionSelector(T) {}\n};\n\ntemplate \nFunctionSelector make_function_selector_true(T fn) {\n return FunctionSelector(fn);\n}\n\ntemplate \nFunctionSelector make_function_selector_false(T fn) {\n return FunctionSelector(fn);\n}\n\ntypedef void(*fn_ptr_type)();\n\nstd::vector fns;\n\ntemplate \nvoid add_fn(FunctionSelector fs) {\n fns.push_back(fs.get());\n}", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "template \nvoid add_fn(FunctionSelector) {\n // Do nothing.\n}\n\n// fn1 will be kept by the linker since it is added to the vector \"fns\" at\n// runtime.\nvoid fn1() {\n printf(\"fn1\\n\");\n}\n\n// fn2 will be removed by the linker since it isn't referenced at all.\nvoid fn2() {\n printf(\"fn2\\n\");\n}\n\nint main() {\n add_fn(make_function_selector_true(fn1));\n add_fn(make_function_selector_false(fn2));\n}\n```\n\n#### [2] Dead Code Eliminated by the Compiler\n\nC++ Compilers can detect dead ([unreachable](https://en.wikipedia.org/wiki/Unreachable_code)) code by analyzing the code\u2019s control flow statically. For example, if there\u2019s a code-path that comes after an **unconditional exception throw**, then all the code after it will be marked as dead code and not converted to object code by the compiler. Typically, compilers require the use of the `-fdce` flag to eliminate dead code.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "In the example below, you can see that the C++ code on the left (in the red boxes) doesn\u2019t have any corresponding generated object code on the right.\n\n\n
\n
\n\n\n Figure 4: Dead Code Elimination by C++ Compilers\n
\n\nThis property is leveraged in the bodies of PyTorch kernel implementations that have a lot of repeated code to handle multiple dtypes of a Tensor. A [dtype](https://pytorch.org/docs/stable/tensor_attributes.html) is the underlying data-type that the Tensor stores elements of. This can be one of float, double, int64, bool, int8, etc\u2026\n\nAlmost every PyTorch CPU kernel uses a macro of the form AT_DISPATCH_ALL_TYPES* that is used to substitute some code specialized for every dtype that the kernel needs to handle. For example:", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
+{"page_content": "```python\nAT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3(\n kBool, kHalf, kBFloat16, dtype, \"copy_kernel\", [&] {\n cpu_kernel_vec(\n iter,\n [=](scalar_t a) -> scalar_t { return a; },\n [=](Vectorized a) -> Vectorized { return a; });\n});\n```\n\nThe macro `AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3` internally has a switch-case statement that looks like the code in Figure-4 above. The tracing process records the dtypes triggered for the kernel tag \"`copy_kernel`\" and the build process processes these tags and inserts `throw` statements in every `case` statement that is handling the dtype that isn\u2019t required for this kernel tag.\n\nThis is how dtype selectivity is implemented in PyTorch\u2019s Tracing Based Selective Build.\n\n## Conclusion\n\nTracing Based Selective Build is a practical and scalable approach to selecting only the used parts of an application to retain code that static analysis can not detect. This code is usually extremely data/input dependent in nature.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
{"page_content": "This article provides detailed insights into how Tracing Based Selective Build works under the hood, and the technical details related to its implementation. These techniques can also be applied to other applications and situations that can benefit from reduced binary size.", "metadata": {"source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch & OpenXLA: The Path Forward\"\nauthor: Milad Mohammadi, Jack Cao, Shauheen Zahirazami, Joe Spisak, and Jiewen Tan \n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "As we celebrate the release of [OpenXLA](https://opensource.googleblog.com/2023/03/openxla-is-ready-to-accelerate-and-simplify-ml-development.html), [PyTorch 2.0](https://pytorch.org/blog/pytorch-2.0-release/), and [PyTorch/XLA 2.0](https://pytorch.org/blog/pytorch-2.0-xla/), it\u2019s worth taking a step back and sharing where we see it all going in the short to medium term. With PyTorch adoption leading in the AI space and XLA supporting best-in-class compiler features, PyTorch/XLA is well positioned to", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "provide a cutting edge development stack for both model training and inference. To achieve this, we see investments in three main areas:", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "* **Training Large Models** - Large language models (LLM) and diffusion models have quickly risen in popularity and many cutting edge applications today are built on them. Further to this, training these models requires scale and more specifically the ability to train across thousands of accelerators. To achieve this we are investing in features such as AMP for mixed precision training, PjRt for increased runtime performance, SPMD / FSDP for efficient model sharding, Dynamic Shapes to enable new research", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "approaches, faster data loading through Ray and tf.data, and a toolchain that packages all of these features together into a seamless workflow. Some of these features are already available in experimental or beta stages, and others are coming up this year with many heavily leveraging the underlying OpenXLA compiler stack.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "* **Model Inference** - With large models continuing to grow in size and computational cost, deployment becomes the next challenge as these models continue to find their way into applications. With the introduction of Dynamo in the PyTorch 2.0 release, PyTorch/XLA delivers performance competitive inference. We are, however, incorporating additional inference-oriented including model serving support, Dynamo for sharded large models, quantization via Torch.Export and StableHLO.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "* **Ecosystem integration** - We are expanding integration with [Hugging Face](https://huggingface.co/) and [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/) so users can take advantage of upcoming PyTorch/XLA cutting edge features (e.g. FSDP support in Hugging Face) and the downstream OpenXLA features (e.g. Quantization) through familiar APIs.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, PyTorch/XLA is set to migrate to the open source [OpenXLA](https://github.com/openxla) as its default downstream compiler; allowing the PyTorch community to gain access to a leading, framework-agnostic compiler stack that enjoys industry-wide contribution and innovation. To achieve this, we will begin supporting StableHLO. As a result, OpenXLA will replace the existing TF:XLA dependency, overall streamlining the dependencies and creating leverage from the broader compiler ecosystem.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch/XLA will also sunset the XRT runtime after migration. You can see the resulting high level stack below with the TensorFlow dependency stricken out:", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "{:style=\"max-height:800px; width:100%\"} \n\n**Figure:** the upcoming PyTorch/XLA features and integrations are illustrated here", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "We cannot be more excited about what\u2019s ahead for PyTorch/XLA and invite the community to join us. PyTorch/XLA is developed fully in open source so please file issues, submit pull requests, and send RFCs to [GitHub](https://github.com/pytorch/xla) such that we can openly collaborate. You can also [try out](https://colab.sandbox.google.com/github/pytorch/xla/blob/master/contrib/colab/getting-started.ipynb) PyTorch/XLA for yourself on various XLA devices including TPUs and GPUs.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "Cheers, \nThe PyTorch/XLA Team at Google", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Scaling Multimodal Foundation Models in TorchMultimodal with Pytorch Distributed\"\nauthor: Ankita De, Edward Wang (EcoF), Rohan Varma, Anjali Sridhar, Kartikay Khandelwal\nfeatured-img: \"/assets/images/scaling-multimodal-image1-diagram-of-multimodal-flava-new.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "Introduction", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "In recent years, scaling model sizes has become a promising area of research. In the field of NLP, language models have gone from hundreds of millions of parameters (BERT) to hundreds of billions of parameters (GPT-3) demonstrating significant improvements on downstream tasks. The [scaling laws](https://arxiv.org/pdf/2001.08361.pdf) for large scale language models have also been studied extensively in the industry. A similar trend can be observed in the vision field, with the community moving to transformer", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "based models (like [Vision Transformer](https://arxiv.org/pdf/2010.11929.pdf), [Masked Auto Encoders](https://arxiv.org/pdf/2111.06377.pdf)) as well. It is clear that individual modalities - text, image, video - have benefited massively from recent advancements in scale, and frameworks have quickly adapted to accommodate larger models.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch & OpenXLA: The Path Forward\"\nauthor: Milad Mohammadi, Jack Cao, Shauheen Zahirazami, Joe Spisak, and Jiewen Tan \n---\n\nAs we celebrate the release of [OpenXLA](https://opensource.googleblog.com/2023/03/openxla-is-ready-to-accelerate-and-simplify-ml-development.html), [PyTorch 2.0](https://pytorch.org/blog/pytorch-2.0-release/), and [PyTorch/XLA 2.0](https://pytorch.org/blog/pytorch-2.0-xla/), it\u2019s worth taking a step back and sharing where we see it all going in the short to medium term. With PyTorch adoption leading in the AI space and XLA supporting best-in-class compiler features, PyTorch/XLA is well positioned to provide a cutting edge development stack for both model training and inference. To achieve this, we see investments in three main areas:", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
+{"page_content": "* **Training Large Models** - Large language models (LLM) and diffusion models have quickly risen in popularity and many cutting edge applications today are built on them. Further to this, training these models requires scale and more specifically the ability to train across thousands of accelerators. To achieve this we are investing in features such as AMP for mixed precision training, PjRt for increased runtime performance, SPMD / FSDP for efficient model sharding, Dynamic Shapes to enable new research approaches, faster data loading through Ray and tf.data, and a toolchain that packages all of these features together into a seamless workflow. Some of these features are already available in experimental or beta stages, and others are coming up this year with many heavily leveraging the underlying OpenXLA compiler stack.\n* **Model Inference** - With large models continuing to grow in size and computational cost, deployment becomes the next challenge as these models continue to find their way into applications. With the introduction of Dynamo in the PyTorch 2.0 release, PyTorch/XLA delivers performance competitive inference. We are, however, incorporating additional inference-oriented including model serving support, Dynamo for sharded large models, quantization via Torch.Export and StableHLO.\n* **Ecosystem integration** - We are expanding integration with [Hugging Face](https://huggingface.co/) and [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/) so users can take advantage of upcoming PyTorch/XLA cutting edge features (e.g. FSDP support in Hugging Face) and the downstream OpenXLA features (e.g. Quantization) through familiar APIs.", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
+{"page_content": "Additionally, PyTorch/XLA is set to migrate to the open source [OpenXLA](https://github.com/openxla) as its default downstream compiler; allowing the PyTorch community to gain access to a leading, framework-agnostic compiler stack that enjoys industry-wide contribution and innovation. To achieve this, we will begin supporting StableHLO. As a result, OpenXLA will replace the existing TF:XLA dependency, overall streamlining the dependencies and creating leverage from the broader compiler ecosystem. PyTorch/XLA will also sunset the XRT runtime after migration. You can see the resulting high level stack below with the TensorFlow dependency stricken out:\n\n{:style=\"max-height:800px; width:100%\"} \n\n**Figure:** the upcoming PyTorch/XLA features and integrations are illustrated here", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
+{"page_content": "We cannot be more excited about what\u2019s ahead for PyTorch/XLA and invite the community to join us. PyTorch/XLA is developed fully in open source so please file issues, submit pull requests, and send RFCs to [GitHub](https://github.com/pytorch/xla) such that we can openly collaborate. You can also [try out](https://colab.sandbox.google.com/github/pytorch/xla/blob/master/contrib/colab/getting-started.ipynb) PyTorch/XLA for yourself on various XLA devices including TPUs and GPUs.\n\nCheers, \nThe PyTorch/XLA Team at Google", "metadata": {"source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Scaling Multimodal Foundation Models in TorchMultimodal with Pytorch Distributed\"\nauthor: Ankita De, Edward Wang (EcoF), Rohan Varma, Anjali Sridhar, Kartikay Khandelwal\nfeatured-img: \"/assets/images/scaling-multimodal-image1-diagram-of-multimodal-flava-new.png\"\n---\n\n## Introduction", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "## Introduction\n\nIn recent years, scaling model sizes has become a promising area of research. In the field of NLP, language models have gone from hundreds of millions of parameters (BERT) to hundreds of billions of parameters (GPT-3) demonstrating significant improvements on downstream tasks. The [scaling laws](https://arxiv.org/pdf/2001.08361.pdf) for large scale language models have also been studied extensively in the industry. A similar trend can be observed in the vision field, with the community moving to transformer based models (like [Vision Transformer](https://arxiv.org/pdf/2010.11929.pdf), [Masked Auto Encoders](https://arxiv.org/pdf/2111.06377.pdf)) as well. It is clear that individual modalities - text, image, video - have benefited massively from recent advancements in scale, and frameworks have quickly adapted to accommodate larger models.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
{"page_content": "At the same time, multimodality is becoming increasingly important in research with tasks like image-text retrieval, visual question-answering, visual dialog and text to image generation gaining traction in real world applications. Training large scale multimodal models is the natural next step and we already see several efforts in this area like [CLIP](https://openai.com/blog/clip/) from OpenAI, [Parti](https://parti.research.google/) from Google and [CM3](https://arxiv.org/pdf/2201.07520.pdf) from Meta.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "In this blog, we present a case study demonstrating the scaling of [FLAVA](https://flava-model.github.io/) to 10B params using techniques from PyTorch Distributed. FLAVA is a vision and language foundation model, available in [TorchMultimodal](https://github.com/facebookresearch/multimodal/tree/main/torchmultimodal/models/flava), which has shown competitive performance on both unimodal and multimodal benchmarks. We also give the relevant code pointers in this blog. The instructions for running an example", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "script to scale FLAVA can be found [here](https://github.com/facebookresearch/multimodal/tree/main/examples/flava/native).", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "Scaling FLAVA Overview", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "FLAVA is a foundation multimodal model which consists of transformer based image and text encoders followed by a transformer-based multimodal fusion module. It is pretrained on both unimodal and multimodal data with a diverse set of losses. This includes masked language, image and multimodal modeling losses that require the model to reconstruct the original input from its context (self-supervised learning). It also uses image text matching loss over positive and negative examples of aligned image-text pairs", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "as well as CLIP style contrastive loss. In addition to multimodal tasks (like image-text retrieval), FLAVA demonstrated competitive performance on unimodal benchmarks as well (GLUE tasks for NLP and image classification for vision).", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\nThe original FLAVA model has ~350M parameters and uses ViT-B16 configurations (from the [Vision Transformer paper](https://arxiv.org/pdf/2010.11929.pdf)) for image and text encoders. The multimodal fusion transformer follows the unimodal encoders but with half the number of layers. We explore increasing the size of each encoder to larger ViT variants.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "Another aspect of scaling is adding the ability to increase the batch size. FLAVA makes use of contrastive loss over in-batch negatives, which typically benefits from large batch size (as studied [here](https://openreview.net/pdf?id=U2exBrf_SJh)). The largest training efficiency or throughput is also generally achieved when operating near maximum possible batch sizes as determined by the amount of GPU memory available (also see the experiments section).", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "The following table displays the different model configurations we experimented with. We also determine the maximum batch size that was able to fit in memory for each configuration in the experiments section.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "| Approx Model params | Hidden size | MLP size | Heads | Unimodal layers | Multimodal layers | Model size (fp32) |\n|-----------------------|---------------|----------|---------|-------------------|---------------------|---------------------|\n| 350M (original) | 768 | 3072 | 12 | 12 | 6 | 1.33GB |\n| 900M | 1024 | 4096 | 16 | 24 | 12 | 3.48GB |", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "| 1.8B | 1280 | 5120 | 16 | 32 | 16 | 6.66GB |\n| 2.7B | 1408 | 6144 | 16 | 40 | 20 | 10.3GB |\n| 4.8B | 1664 | 8192 | 16 | 48 | 24 | 18.1GB |\n| 10B | 2048 | 10240 | 16 | 64 | 40 | 38GB |", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "Optimization overview\n\nPyTorch offers several native techniques to efficiently scale models. In the following sections, we go over some of these techniques and show how they can be applied to scale up a FLAVA model to 10 billion parameters.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "Distributed Data Parallel\n\nA common starting point for distributed training is data parallelism. Data parallelism replicates the model across each worker (GPU), and partitions the dataset across the workers. Different workers process different data partitions in parallel and synchronize their gradients (via all reduce) before model weights are updated. The figure below showcases the flow (forward, backward, and weight update steps) for processing a single example for data parallelism:", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch provides a native API, [DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) (DDP) to enable data parallelism which can be used as a module wrapper as showcased below. Please see PyTorch Distributed [documentation](https://pytorch.org/docs/stable/distributed.html#) for more details.\n\n```Python\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nimport torch\nimport torch.distributed as dist", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "model = flava_model_for_pretraining().cuda()\n# Initialize PyTorch Distributed process groups\n# Please see https://pytorch.org/tutorials/intermediate/dist_tuto.html for details\ndist.init_process_group(backend=\u201dnccl\u201d)\n# Wrap model in DDP\nmodel = torch.nn.parallel.DistributedDataParallel(model, device_ids=[torch.cuda.current_device()])\n```", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "Fully Sharded Data Parallel\n\nGPU memory usage of a training application can roughly be broken down into model inputs, intermediate activations (needed for gradient computation), model parameters, gradients, and optimizer states. Scaling a model will typically increase each of these elements. Scaling a model with DDP can eventually result in out-of-memory issues when a single GPU's memory becomes insufficient since it replicates the parameters, gradients, and optimizer states on all workers.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "To reduce this replication and save GPU memory, we can shard the model parameters, gradients, and optimizer states across all workers with each worker only managing a single shard. This technique was popularized by the [ZeRO-3](https://arxiv.org/abs/1910.02054) approach developed by Microsoft. A PyTorch-native implementation of this approach is available as [FullyShardedDataParallel](https://pytorch.org/docs/stable/fsdp.html) (FSDP) API, released as a beta feature in PyTorch 1.12. During a module\u2019s forward", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "and backward passes, FSDP unshards the model parameters as needed for computation (using all-gather) and reshards them after computation. It synchronizes gradients using the reduce-scatter collective to ensure sharded gradients are globally averaged. The forward and backward pass flow of a model wrapped in FSDP are detailed below:", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "To use FSDP, the submodules of a model need to be wrapped with the API to control when specific submodules are sharded or unsharded. FSDP provides an auto-wrapping API (see the [auto_wrap_policy](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel) argument) that can be used out of the box as well as several [wrapping policies](https://github.com/pytorch/pytorch/blob/master/torch/distributed/fsdp/wrap.py) and the ability to [write your own", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "policy](https://github.com/pytorch/pytorch/blob/75c0e3a471c19b883feca15fd4ecfabedf746691/torch/distributed/fsdp/fully_sharded_data_parallel.py#L858).", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "The following example demonstrates wrapping the FLAVA model with FSDP. We specify the auto-wrapping policy as `transformer_auto_wrap_policy`. This will wrap individual transformer layers (`TransformerEncoderLayer`), the image transformer (`ImageTransformer`), text encoder (`BERTTextEncoder`) and multimodal encoder (`FLAVATransformerWithoutEmbeddings`) as individual FSDP units. This uses a recursive wrapping approach for efficient memory management. For example, after an individual transformer layer\u2019s", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "forward or backward pass is finished, its parameters are discarded, freeing up memory thereby reducing peak memory usage.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "FSDP also provides a number of configurable options to tune the performance of applications. For example, in our use case, we illustrate the use of the new `limit_all_gathers` flag, which prevents all-gathering model parameters too early thereby alleviating memory pressure on the application. We encourage users to experiment with this flag which can potentially improve the performance of applications with high active memory usage.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "```Python\nimport torch\nfrom torch.distributed.fsdp import FullyShardedDataParallel as FSDP\nfrom torch.distributed.fsdp.wrap import transformer_auto_wrap_policy\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nfrom torchmultimodal.models.flava.text_encoder import BertTextEncoder\nfrom torchmultimodal.models.flava.image_encoder import ImageTransformer\nfrom torchmultimodal.models.flava.transformer import FLAVATransformerWithoutEmbeddings", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "from torchmultimodal.modules.layers.transformer import TransformerEncoderLayer", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "model = flava_model_for_pretraining().cuda()\ndist.init_process_group(backend=\u201dnccl\u201d)", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "model = FSDP(\n model,\n device_id=torch.cuda.current_device(),\n auto_wrap_policy=partial(\n transformer_auto_wrap_policy,\n transformer_layer_cls={\n TransformerEncoderLayer,\n ImageTransformer,\n BERTTextEncoder,\n FLAVATransformerWithoutEmbeddings\n },\n ),\n limit_all_gathers=True,\n )\n```", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "Activation Checkpointing", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "As discussed above, intermediate activations, model parameters, gradients, and optimizer states contribute to the overall GPU memory usage. FSDP can reduce memory consumption due to the latter three but does not reduce memory consumed by activations. Memory used by activations increases with increase in batch size or number of hidden layers. Activation checkpointing is a technique to decrease this memory usage by recomputing the activations during the backward pass instead of holding them in memory for a", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "specific checkpointed module. For example, we observed ~4x reduction in the peak active memory after forward pass by applying activation checkpointing to the 2.7B parameter model.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch offers a wrapper based activation checkpointing API. In particular, `checkpoint_wrapper` allows users to wrap an individual module with checkpointing, and `apply_activation_checkpointing` allows users to specify a policy with which to wrap modules within an overall module with checkpointing. Both these APIs can be applied to most models as they do not require any modifications to the model definition code. However, if more granular control over checkpointed segments, such as checkpointing specific", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "functions within a module, is required, the functional `torch.utils.checkpoint` [API](https://pytorch.org/docs/stable/checkpoint.html) can be leveraged, although this requires modification to the model code. The application of the activation checkpointing wrapper to individual FLAVA transformer layers (denoted by `TransformerEncoderLayer`) is shown below. For a thorough description of activation checkpointing, please see the description in the [PyTorch", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "documentation](https://pytorch.org/docs/stable/checkpoint.html).", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "```Python\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nfrom torch.distributed.algorithms._checkpoint.checkpoint_wrapper import apply_activation_checkpointing, checkpoint_wrapper, CheckpointImpl\nfrom torchmultimodal.modules.layers.transformer import TransformerEncoderLayer\n\nmodel = flava_model_for_pretraining()\ncheckpoint_tformer_layers_policy = lambda submodule: isinstance(submodule, TransformerEncoderLayer)", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "apply_activation_checkpointing(\n model,\n checkpoint_wrapper_fn=checkpoint_wrapper,\n check_fn=checkpoint_tformer_layers_policy,\n )\n```\nUsed together, wrapping FLAVA transformer layers with activation checkpointing and wrapping the overall model with FSDP as demonstrated above, we are able to scale FLAVA to 10B parameters.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "Experiments", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "We conduct an empirical study about the impact of the different optimizations from the previous section on system performance. For all our experiments, we use a single node with 8 A100 40GB GPUs and run the pretraining for 1000 iterations. All runs also used PyTorch\u2019s [automatic mixed precision](https://pytorch.org/docs/stable/amp.html) with the bfloat16 data type. [TensorFloat32](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices) format is also enabled to improve matmul", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "performance on the A100. We define throughput as the average number of items (text or image) processed per second (we ignore the first 100 iterations while measuring throughput to account for warmup). We leave training to convergence and its impact on downstream task metrics as an area for future study.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "In this blog, we present a case study demonstrating the scaling of [FLAVA](https://flava-model.github.io/) to 10B params using techniques from PyTorch Distributed. FLAVA is a vision and language foundation model, available in [TorchMultimodal](https://github.com/facebookresearch/multimodal/tree/main/torchmultimodal/models/flava), which has shown competitive performance on both unimodal and multimodal benchmarks. We also give the relevant code pointers in this blog. The instructions for running an example script to scale FLAVA can be found [here](https://github.com/facebookresearch/multimodal/tree/main/examples/flava/native).\n\n## Scaling FLAVA Overview", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "FLAVA is a foundation multimodal model which consists of transformer based image and text encoders followed by a transformer-based multimodal fusion module. It is pretrained on both unimodal and multimodal data with a diverse set of losses. This includes masked language, image and multimodal modeling losses that require the model to reconstruct the original input from its context (self-supervised learning). It also uses image text matching loss over positive and negative examples of aligned image-text pairs as well as CLIP style contrastive loss. In addition to multimodal tasks (like image-text retrieval), FLAVA demonstrated competitive performance on unimodal benchmarks as well (GLUE tasks for NLP and image classification for vision).\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "The original FLAVA model has ~350M parameters and uses ViT-B16 configurations (from the [Vision Transformer paper](https://arxiv.org/pdf/2010.11929.pdf)) for image and text encoders. The multimodal fusion transformer follows the unimodal encoders but with half the number of layers. We explore increasing the size of each encoder to larger ViT variants. \n\nAnother aspect of scaling is adding the ability to increase the batch size. FLAVA makes use of contrastive loss over in-batch negatives, which typically benefits from large batch size (as studied [here](https://openreview.net/pdf?id=U2exBrf_SJh)). The largest training efficiency or throughput is also generally achieved when operating near maximum possible batch sizes as determined by the amount of GPU memory available (also see the experiments section). \n\nThe following table displays the different model configurations we experimented with. We also determine the maximum batch size that was able to fit in memory for each configuration in the experiments section.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "| Approx Model params | Hidden size | MLP size | Heads | Unimodal layers | Multimodal layers | Model size (fp32) |\n|-----------------------|---------------|----------|---------|-------------------|---------------------|---------------------|\n| 350M (original) | 768 | 3072 | 12 | 12 | 6 | 1.33GB |\n| 900M | 1024 | 4096 | 16 | 24 | 12 | 3.48GB |\n| 1.8B | 1280 | 5120 | 16 | 32 | 16 | 6.66GB |\n| 2.7B | 1408 | 6144 | 16 | 40 | 20 | 10.3GB |\n| 4.8B | 1664 | 8192 | 16 | 48 | 24 | 18.1GB |\n| 10B | 2048 | 10240 | 16 | 64 | 40 | 38GB |", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "## Optimization overview\n\nPyTorch offers several native techniques to efficiently scale models. In the following sections, we go over some of these techniques and show how they can be applied to scale up a FLAVA model to 10 billion parameters.\n\n## Distributed Data Parallel\n\nA common starting point for distributed training is data parallelism. Data parallelism replicates the model across each worker (GPU), and partitions the dataset across the workers. Different workers process different data partitions in parallel and synchronize their gradients (via all reduce) before model weights are updated. The figure below showcases the flow (forward, backward, and weight update steps) for processing a single example for data parallelism:\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "\n Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/\n
\n\nPyTorch provides a native API, [DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) (DDP) to enable data parallelism which can be used as a module wrapper as showcased below. Please see PyTorch Distributed [documentation](https://pytorch.org/docs/stable/distributed.html#) for more details.\n\n```Python\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nimport torch\nimport torch.distributed as dist\n\nmodel = flava_model_for_pretraining().cuda()\n# Initialize PyTorch Distributed process groups\n# Please see https://pytorch.org/tutorials/intermediate/dist_tuto.html for details\ndist.init_process_group(backend=\u201dnccl\u201d)\n# Wrap model in DDP\nmodel = torch.nn.parallel.DistributedDataParallel(model, device_ids=[torch.cuda.current_device()])\n```\n\n## Fully Sharded Data Parallel", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "GPU memory usage of a training application can roughly be broken down into model inputs, intermediate activations (needed for gradient computation), model parameters, gradients, and optimizer states. Scaling a model will typically increase each of these elements. Scaling a model with DDP can eventually result in out-of-memory issues when a single GPU's memory becomes insufficient since it replicates the parameters, gradients, and optimizer states on all workers.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "To reduce this replication and save GPU memory, we can shard the model parameters, gradients, and optimizer states across all workers with each worker only managing a single shard. This technique was popularized by the [ZeRO-3](https://arxiv.org/abs/1910.02054) approach developed by Microsoft. A PyTorch-native implementation of this approach is available as [FullyShardedDataParallel](https://pytorch.org/docs/stable/fsdp.html) (FSDP) API, released as a beta feature in PyTorch 1.12. During a module\u2019s forward and backward passes, FSDP unshards the model parameters as needed for computation (using all-gather) and reshards them after computation. It synchronizes gradients using the reduce-scatter collective to ensure sharded gradients are globally averaged. The forward and backward pass flow of a model wrapped in FSDP are detailed below:\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "\n Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/\n
\n\nTo use FSDP, the submodules of a model need to be wrapped with the API to control when specific submodules are sharded or unsharded. FSDP provides an auto-wrapping API (see the [auto_wrap_policy](https://pytorch.org/docs/stable/fsdp.html#torch.distributed.fsdp.FullyShardedDataParallel) argument) that can be used out of the box as well as several [wrapping policies](https://github.com/pytorch/pytorch/blob/master/torch/distributed/fsdp/wrap.py) and the ability to [write your own policy](https://github.com/pytorch/pytorch/blob/75c0e3a471c19b883feca15fd4ecfabedf746691/torch/distributed/fsdp/fully_sharded_data_parallel.py#L858).", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "The following example demonstrates wrapping the FLAVA model with FSDP. We specify the auto-wrapping policy as `transformer_auto_wrap_policy`. This will wrap individual transformer layers (`TransformerEncoderLayer`), the image transformer (`ImageTransformer`), text encoder (`BERTTextEncoder`) and multimodal encoder (`FLAVATransformerWithoutEmbeddings`) as individual FSDP units. This uses a recursive wrapping approach for efficient memory management. For example, after an individual transformer layer\u2019s forward or backward pass is finished, its parameters are discarded, freeing up memory thereby reducing peak memory usage.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "FSDP also provides a number of configurable options to tune the performance of applications. For example, in our use case, we illustrate the use of the new `limit_all_gathers` flag, which prevents all-gathering model parameters too early thereby alleviating memory pressure on the application. We encourage users to experiment with this flag which can potentially improve the performance of applications with high active memory usage.\n\n```Python\nimport torch\nfrom torch.distributed.fsdp import FullyShardedDataParallel as FSDP\nfrom torch.distributed.fsdp.wrap import transformer_auto_wrap_policy\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nfrom torchmultimodal.models.flava.text_encoder import BertTextEncoder\nfrom torchmultimodal.models.flava.image_encoder import ImageTransformer\nfrom torchmultimodal.models.flava.transformer import FLAVATransformerWithoutEmbeddings\nfrom torchmultimodal.modules.layers.transformer import TransformerEncoderLayer", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "model = flava_model_for_pretraining().cuda()\ndist.init_process_group(backend=\u201dnccl\u201d)\n\nmodel = FSDP(\n model,\n device_id=torch.cuda.current_device(),\n auto_wrap_policy=partial(\n transformer_auto_wrap_policy,\n transformer_layer_cls={\n TransformerEncoderLayer,\n ImageTransformer,\n BERTTextEncoder,\n FLAVATransformerWithoutEmbeddings\n },\n ),\n limit_all_gathers=True,\n )\n```\n\n## Activation Checkpointing", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "As discussed above, intermediate activations, model parameters, gradients, and optimizer states contribute to the overall GPU memory usage. FSDP can reduce memory consumption due to the latter three but does not reduce memory consumed by activations. Memory used by activations increases with increase in batch size or number of hidden layers. Activation checkpointing is a technique to decrease this memory usage by recomputing the activations during the backward pass instead of holding them in memory for a specific checkpointed module. For example, we observed ~4x reduction in the peak active memory after forward pass by applying activation checkpointing to the 2.7B parameter model.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "PyTorch offers a wrapper based activation checkpointing API. In particular, `checkpoint_wrapper` allows users to wrap an individual module with checkpointing, and `apply_activation_checkpointing` allows users to specify a policy with which to wrap modules within an overall module with checkpointing. Both these APIs can be applied to most models as they do not require any modifications to the model definition code. However, if more granular control over checkpointed segments, such as checkpointing specific functions within a module, is required, the functional `torch.utils.checkpoint` [API](https://pytorch.org/docs/stable/checkpoint.html) can be leveraged, although this requires modification to the model code. The application of the activation checkpointing wrapper to individual FLAVA transformer layers (denoted by `TransformerEncoderLayer`) is shown below. For a thorough description of activation checkpointing, please see the description in the [PyTorch documentation](https://pytorch.org/docs/stable/checkpoint.html).", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "```Python\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nfrom torch.distributed.algorithms._checkpoint.checkpoint_wrapper import apply_activation_checkpointing, checkpoint_wrapper, CheckpointImpl\nfrom torchmultimodal.modules.layers.transformer import TransformerEncoderLayer\n\nmodel = flava_model_for_pretraining()\ncheckpoint_tformer_layers_policy = lambda submodule: isinstance(submodule, TransformerEncoderLayer)\n\napply_activation_checkpointing(\n model,\n checkpoint_wrapper_fn=checkpoint_wrapper,\n check_fn=checkpoint_tformer_layers_policy,\n )\n```\nUsed together, wrapping FLAVA transformer layers with activation checkpointing and wrapping the overall model with FSDP as demonstrated above, we are able to scale FLAVA to 10B parameters.\n\n## Experiments", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "## Experiments\n\nWe conduct an empirical study about the impact of the different optimizations from the previous section on system performance. For all our experiments, we use a single node with 8 A100 40GB GPUs and run the pretraining for 1000 iterations. All runs also used PyTorch\u2019s [automatic mixed precision](https://pytorch.org/docs/stable/amp.html) with the bfloat16 data type. [TensorFloat32](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices) format is also enabled to improve matmul performance on the A100. We define throughput as the average number of items (text or image) processed per second (we ignore the first 100 iterations while measuring throughput to account for warmup). We leave training to convergence and its impact on downstream task metrics as an area for future study.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
{"page_content": "Figure 1 plots the throughput for each model configuration and optimization, both with a local batch size of 8 and then with the maximum batch size possible on 1 node. Absence of a data point for a model variant for an optimization indicates that the model could not be trained on a single node.\n\nFigure 2 plots the maximum possible batch size per worker for each optimization. We observe a few things:", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "1. Scaling model size: DDP is only able to fit the 350M and 900M model on a node. With FSDP, due to memory savings, we are able to train ~3x bigger models compared to DDP (i.e. the 1.8B and 2.7B variants). Combining activation checkpointing (AC) with FSDP enables training even bigger models, on the order of ~10x compared to DDP (i.e. 4.8B and 10B variants)\n2. Throughput:", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "- For smaller model sizes, at a constant batch size of 8, the throughput for DDP is slightly higher than or equal to FSDP, explainable by the additional communication required by FSDP. It is lowest for FSDP and AC combined together. This is because AC re-runs checkpointed forward passes during the backwards pass, trading off additional computation for memory savings. However, in the case of the 2.7B model, FSDP + AC actually has higher throughput compared to FSDP alone. This is because the 2.7B model with", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "FSDP is operating close to the memory limit even at batch size 8 triggering CUDA malloc retries which tend to slow down training. AC helps with reducing the memory pressure and leads to no retries.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "- For DDP and FSDP + AC, the throughput increases with an increase in batch size for each model. For FSDP alone, this is true for smaller variants. However, with the 1.8B and 2.7B parameter models, we observe throughput degradation when increasing batch size. A potential reason for this, as noted above also, is that at the memory limit, PyTorch\u2019s CUDA memory management may have to retry cudaMalloc calls and/or run expensive defragmentation steps to find free memory blocks to handle the workload\u2019s memory", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "requirements which can result in training slowdown.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "- For larger models that can only be trained with FSDP (1.8B, 2.7B, 4.8B) the setting with highest throughput achieved is with FSDP + AC scaling to the maximum batch size. For 10B, we observe nearly equal throughput for smaller and maximum batch size. This might be counterintuitive as AC results in increased computation and maxing out batch size potentially leads to expensive defragmentation operations due to operating at CUDA memory limit. However, for these large models, the increase in batch size is", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "large enough to mask this overhead.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n Figure 1: Training throughput for different configurations\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "\n - Batch size: FSDP alone enables slightly higher batch sizes compared to DDP. Using FSDP + AC enables ~3x batch size compared to DDP for the 350M param model and ~5.5x for 900M param model. Even for 10B, a max batch size of ~20 which is fairly decent. This essentially enables larger global batch size using fewer GPUs which is especially useful for contrastive learning tasks.
\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n Figure 2: Max local batchsize possible for different configurations\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "Conclusion", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "As the world moves towards multimodal foundation models, scaling model parameters and efficient training is becoming an area of focus. The PyTorch ecosystem aims to accelerate innovation in this field by providing different tools to the research community, both for training and scaling multimodal models. With FLAVA, we laid out an example of scaling a model for multimodal understanding. In the future, we plan to add support for other kinds of models like the ones for multimodal generation and demonstrate", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "their scaling factors. We also hope to automate many of these scaling and memory saving techniques (such as sharding and activation checkpointing) to reduce the amount of user experimentation needed to achieve the desired scale and maximum training throughput.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "References\n\n- [Introducing TorchMultimodal - a library for accelerating exploration in Multimodal AI](https://pytorch.org/blog/introducing-torchmultimodal/)\n- [FLAVA paper](https://deploy-preview-1186--pytorch-dot-org-preview.netlify.app/blog/introducing-torchmultimodal/)\n- [Introducing Pytorch FSDP](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/)", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"A BetterTransformer for Fast Transformer Inference\"\nauthor: Michael Gschwind, Eric Han, Scott Wolchok, Rui Zhu, Christian Puhrsch\nfeatured-img: \"/assets/images/2022-7-12-a-better-transformer-for-fast-transformer-encoder-inference-3.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "**tl;dr** Transformers achieve state-of-the-art performance for NLP, and are becoming popular for a myriad of other tasks. They are computationally expensive which has been a blocker to their widespread productionisation. Launching with PyTorch 1.12, BetterTransformer implements a backwards-compatible fast path of `torch.nn.TransformerEncoder` for Transformer Encoder Inference and does not require model authors to modify their models. BetterTransformer improvements can exceed 2x in speedup and throughput", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "for many common execution scenarios. To use BetterTransformer, [install](https://pytorch.org/get-started/locally/) PyTorch 1.12 and start using high-quality, high-performance Transformer models with the PyTorch API today.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\nDiagram of the Transformer Encoder Architecture (from \"Attention Is All You Need\"). During Inference, the entire module will execute as a single PyTorch-native function.\n
", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "In this blog post, we share the following topics \u2014 Performance Improvements, Backwards compatibility, and Taking advantage of the FastPath. Learn more about these topics below.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "Performance Improvements", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "BetterTransformer launches with accelerated native implementations of MultiHeadAttention and TransformerEncoderLayer for CPUs and GPUs. These fast paths are integrated in the standard PyTorch Transformer APIs, and will accelerate [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html), [TransformerEncoderLayer](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) and", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "[MultiHeadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) nn.modules. These new modules implement two types of optimizations: (1) fused kernels combine multiple individual operators normally used to implement Transformers to provide a more efficient implementation, and (2) take advantage of sparsity in the inputs to avoid performing unnecessary operations on padding tokens. Padding tokens frequently account for a large fraction of input batches in many Transformer", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "models used for Natural Language Processing.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "Backwards compatibility", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "Advantageously, **no model changes are necessary to benefit from the performance boost offered by BetterTransformer.** To benefit from fast path execution, inputs and operating conditions must satisfy some access conditions (see below). While the internal implementation of Transformer APIs has changed, PyTorch 1.12 maintains strict compatibility with Transformer modules shipped in previous versions, enabling PyTorch users to use models created and trained with previous PyTorch releases while benefiting from", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "BetterTransformer improvements.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "In addition to enabling the PyTorch nn.Modules, BetterTransformer provides improvements for PyTorch libraries. Performance benefits will become available through two different enablement paths:", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "1. Transparent acceleration: Current users of PyTorch nn.Modules such as [MultiHeadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) as well as higher-level Transformer components will benefit from the improved performance of the new nn.Modules automatically. An example of this is the [visual transformer (ViT)](https://arxiv.org/abs/2010.11929) implementation used in the torchvision library ([code", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "link](https://github.com/pytorch/vision/blob/87cde716b7f108f3db7b86047596ebfad1b88380/torchvision/models/vision_transformer.py#L103)).", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "2. Torchtext library acceleration: As part of this project, we have optimized Torchtext to build on the PyTorch core API to benefit from BetterTransformer enhancements while maintaining strict and transparent compatibility with previous library versions and models trained with previous Torchtext versions. Using PyTorch Transformers in Torchtext also ensures that Torchtext will benefit from expected future enhancements to the PyTorch Transformer implementation.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "Taking advantage of the Fastpath\n\nBetterTransformer is a fastpath for the PyTorch Transformer API. The fastpath is a native, specialized implementation of key Transformer functions for CPU and GPU that applies to common Transformer use cases.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
+{"page_content": "1. Scaling model size: DDP is only able to fit the 350M and 900M model on a node. With FSDP, due to memory savings, we are able to train ~3x bigger models compared to DDP (i.e. the 1.8B and 2.7B variants). Combining activation checkpointing (AC) with FSDP enables training even bigger models, on the order of ~10x compared to DDP (i.e. 4.8B and 10B variants)\n2. Throughput:\n - For smaller model sizes, at a constant batch size of 8, the throughput for DDP is slightly higher than or equal to FSDP, explainable by the additional communication required by FSDP. It is lowest for FSDP and AC combined together. This is because AC re-runs checkpointed forward passes during the backwards pass, trading off additional computation for memory savings. However, in the case of the 2.7B model, FSDP + AC actually has higher throughput compared to FSDP alone. This is because the 2.7B model with FSDP is operating close to the memory limit even at batch size 8 triggering CUDA malloc retries which tend to slow down training. AC helps with reducing the memory pressure and leads to no retries.\n - For DDP and FSDP + AC, the throughput increases with an increase in batch size for each model. For FSDP alone, this is true for smaller variants. However, with the 1.8B and 2.7B parameter models, we observe throughput degradation when increasing batch size. A potential reason for this, as noted above also, is that at the memory limit, PyTorch\u2019s CUDA memory management may have to retry cudaMalloc calls and/or run expensive defragmentation steps to find free memory blocks to handle the workload\u2019s memory requirements which can result in training slowdown.\n - For larger models that can only be trained with FSDP (1.8B, 2.7B, 4.8B) the setting with highest throughput achieved is with FSDP + AC scaling to the maximum batch size. For 10B, we observe nearly equal throughput for smaller and maximum batch size. This might be counterintuitive as AC results in increased computation and maxing out batch size potentially leads to expensive defragmentation operations due to operating at CUDA memory limit. However, for these large models, the increase in batch size is large enough to mask this overhead.", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\n\n Figure 1: Training throughput for different configurations\n
\n\n\n - Batch size: FSDP alone enables slightly higher batch sizes compared to DDP. Using FSDP + AC enables ~3x batch size compared to DDP for the 350M param model and ~5.5x for 900M param model. Even for 10B, a max batch size of ~20 which is fairly decent. This essentially enables larger global batch size using fewer GPUs which is especially useful for contrastive learning tasks.
\n
\n\n\n
\n
\n\n\n Figure 2: Max local batchsize possible for different configurations\n
\n\n## Conclusion", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
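+{"page_content": "The combination of FSDP sharding and activation checkpointing evaluated above can be expressed in a few lines. The following is a minimal, illustrative sketch (not the actual TorchMultimodal training code): it wraps hypothetical transformer blocks with torch.utils.checkpoint and shards the resulting model with FSDP, assuming a distributed process group has already been initialized (e.g. via torchrun) and using made-up layer sizes.\n\n```python\nimport torch\nimport torch.nn as nn\nfrom torch.utils.checkpoint import checkpoint\nfrom torch.distributed.fsdp import FullyShardedDataParallel as FSDP\n\nclass CheckpointedBlock(nn.Module):\n    # Recompute this block's activations during the backward pass to save memory.\n    def __init__(self, block):\n        super().__init__()\n        self.block = block\n\n    def forward(self, x):\n        return checkpoint(self.block, x)\n\ndef build_model(num_layers=6, dim=512):\n    # Hypothetical stack of transformer encoder layers, each wrapped for activation checkpointing.\n    layers = [\n        CheckpointedBlock(nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True))\n        for _ in range(num_layers)\n    ]\n    return nn.Sequential(*layers)\n\n# Assumes torch.distributed.init_process_group(...) has already run on each rank.\nmodel = FSDP(build_model().cuda())  # parameters, gradients and optimizer state get sharded across ranks\n```", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}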
+{"page_content": "## Conclusion\n\nAs the world moves towards multimodal foundation models, scaling model parameters and efficient training is becoming an area of focus. The PyTorch ecosystem aims to accelerate innovation in this field by providing different tools to the research community, both for training and scaling multimodal models. With FLAVA, we laid out an example of scaling a model for multimodal understanding. In the future, we plan to add support for other kinds of models like the ones for multimodal generation and demonstrate their scaling factors. We also hope to automate many of these scaling and memory saving techniques (such as sharding and activation checkpointing) to reduce the amount of user experimentation needed to achieve the desired scale and maximum training throughput.\n\n## References", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "## References\n\n- [Introducing TorchMultimodal - a library for accelerating exploration in Multimodal AI](https://pytorch.org/blog/introducing-torchmultimodal/)\n- [FLAVA paper](https://deploy-preview-1186--pytorch-dot-org-preview.netlify.app/blog/introducing-torchmultimodal/)\n- [Introducing Pytorch FSDP](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/)", "metadata": {"source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"A BetterTransformer for Fast Transformer Inference\"\nauthor: Michael Gschwind, Eric Han, Scott Wolchok, Rui Zhu, Christian Puhrsch\nfeatured-img: \"/assets/images/2022-7-12-a-better-transformer-for-fast-transformer-encoder-inference-3.png\"\n---\n\n**tl;dr** Transformers achieve state-of-the-art performance for NLP, and are becoming popular for a myriad of other tasks. They are computationally expensive which has been a blocker to their widespread productionisation. Launching with PyTorch 1.12, BetterTransformer implements a backwards-compatible fast path of `torch.nn.TransformerEncoder` for Transformer Encoder Inference and does not require model authors to modify their models. BetterTransformer improvements can exceed 2x in speedup and throughput for many common execution scenarios. To use BetterTransformer, [install](https://pytorch.org/get-started/locally/) PyTorch 1.12 and start using high-quality, high-performance Transformer models with the PyTorch API today.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\n\nDiagram of the Transformer Encoder Architecture (from \"Attention Is All You Need\"). During Inference, the entire module will execute as a single PyTorch-native function.\n
\n\nIn this blog post, we share the following topics \u2014 Performance Improvements, Backwards compatibility, and Taking advantage of the FastPath. Learn more about these topics below. \n\n## Performance Improvements", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
+{"page_content": "BetterTransformer launches with accelerated native implementations of MultiHeadAttention and TransformerEncoderLayer for CPUs and GPUs. These fast paths are integrated in the standard PyTorch Transformer APIs, and will accelerate [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html), [TransformerEncoderLayer](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) and [MultiHeadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) nn.modules. These new modules implement two types of optimizations: (1) fused kernels combine multiple individual operators normally used to implement Transformers to provide a more efficient implementation, and (2) take advantage of sparsity in the inputs to avoid performing unnecessary operations on padding tokens. Padding tokens frequently account for a large fraction of input batches in many Transformer models used for Natural Language Processing. \n\n## Backwards compatibility", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
+{"page_content": "Advantageously, **no model changes are necessary to benefit from the performance boost offered by BetterTransformer.** To benefit from fast path execution, inputs and operating conditions must satisfy some access conditions (see below). While the internal implementation of Transformer APIs has changed, PyTorch 1.12 maintains strict compatibility with Transformer modules shipped in previous versions, enabling PyTorch users to use models created and trained with previous PyTorch releases while benefiting from BetterTransformer improvements.\n\nIn addition to enabling the PyTorch nn.Modules, BetterTransformer provides improvements for PyTorch libraries. Performance benefits will become available through two different enablement paths:", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
+{"page_content": "1. Transparent acceleration: Current users of PyTorch nn.Modules such as [MultiHeadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) as well as higher-level Transformer components will benefit from the improved performance of the new nn.Modules automatically. An example of this is the [visual transformer (ViT)](https://arxiv.org/abs/2010.11929) implementation used in the torchvision library ([code link](https://github.com/pytorch/vision/blob/87cde716b7f108f3db7b86047596ebfad1b88380/torchvision/models/vision_transformer.py#L103)).", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
+{"page_content": "2. Torchtext library acceleration: As part of this project, we have optimized Torchtext to build on the PyTorch core API to benefit from BetterTransformer enhancements while maintaining strict and transparent compatibility with previous library versions and models trained with previous Torchtext versions. Using PyTorch Transformers in Torchtext also ensures that Torchtext will benefit from expected future enhancements to the PyTorch Transformer implementation.\n\n## Taking advantage of the Fastpath\n\nBetterTransformer is a fastpath for the PyTorch Transformer API. The fastpath is a native, specialized implementation of key Transformer functions for CPU and GPU that applies to common Transformer use cases.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
{"page_content": "To take advantage of input sparsity (i.e. padding) in accelerating your model (see Figure 2), set the keyword argument `enable_nested_tensor=True` when instantiating a [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html) and pass in the `src_key_padding_mask` argument (which denotes padding tokens) during inference. This requires the padding mask to be contiguous, which is the typical case.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "Currently, the BetterTransformer speedup only applies to transformer encoder models used in inference. To benefit from fastpath execution, models must be composed of any of the following components: [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html), [TransformerEncoderLayer](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) or [MultiheadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) (MHA).", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "Fastpath execution is also subject to some criteria. Most importantly, the model must be executed in inference mode and operate on input tensors that do not collect gradient tape information (e.g., running with torch.no_grad). The full list of conditions can be found at these links for [nn.MultiHeadAttention](https://github.com/pytorch/pytorch/blob/29189d2ba8e583b2355cd0e9517a1ee742ba12cf/torch/nn/modules/activation.py#L1060) and", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "[nn.TransformerEncoder](https://github.com/pytorch/pytorch/blob/29189d2ba8e583b2355cd0e9517a1ee742ba12cf/torch/nn/modules/transformer.py#L206), respectively. If the criteria are not met, control flows to the legacy PyTorch 1.11 Transformer implementation which has the same API, but lacks the fastpath performance boost.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "Other transformer models (such as decoder models) which use the PyTorch MultiheadAttention module will benefit from the BetterTransformer fastpath. Planned future work is to expand the end-to-end BetterTransformer fastpath to models based on [TransformerDecoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoder.html) to support popular seq2seq and decoder-only (e.g., [OPT](https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/)) model", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "architectures, and to training.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "Speedups\n\nThe following graphs show the performance achieved for the [BERT](https://arxiv.org/abs/1810.04805)-base model with small and large-scale inputs:\n\n\n
\n
\n\n\nFigure 1: PyTorch 1.12 Improvements with BetterTransformer fastpath execution\n
", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
+{"page_content": "Currently, the BetterTransformer speedup only applies to transformer encoder models used in inference. To benefit from fastpath execution, models must be composed of any of the following components: [TransformerEncoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html), [TransformerEncoderLayer](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) or [MultiheadAttention](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) (MHA). Fastpath execution is also subject to some criteria. Most importantly, the model must be executed in inference mode and operate on input tensors that do not collect gradient tape information (e.g., running with torch.no_grad). The full list of conditions can be found at these links for [nn.MultiHeadAttention](https://github.com/pytorch/pytorch/blob/29189d2ba8e583b2355cd0e9517a1ee742ba12cf/torch/nn/modules/activation.py#L1060) and [nn.TransformerEncoder](https://github.com/pytorch/pytorch/blob/29189d2ba8e583b2355cd0e9517a1ee742ba12cf/torch/nn/modules/transformer.py#L206), respectively. If the criteria are not met, control flows to the legacy PyTorch 1.11 Transformer implementation which has the same API, but lacks the fastpath performance boost.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
+{"page_content": "Other transformer models (such as decoder models) which use the PyTorch MultiheadAttention module will benefit from the BetterTransformer fastpath. Planned future work is to expand the end-to-end BetterTransformer fastpath to models based on [TransformerDecoder](https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoder.html) to support popular seq2seq and decoder-only (e.g., [OPT](https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/)) model architectures, and to training.\n\n## Speedups\n\nThe following graphs show the performance achieved for the [BERT](https://arxiv.org/abs/1810.04805)-base model with small and large-scale inputs:\n\n\n
\n
\n\n\nFigure 1: PyTorch 1.12 Improvements with BetterTransformer fastpath execution\n
", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
{"page_content": "\n
\n
\n\n\nFigure 2: PyTorch 1.12 Improvements with BetterTransformer fastpath execution
\nwith sparsity optimization enabled by enable_nested_tensor=True\n
", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "BetterTransformer includes two types of optimization: (1) fused kernels implementing multiple operations more efficiently in a single kernel, and (2) exploiting sparsity by avoiding unnecessary processing on padding tokens. Enhanced performance for small input sizes benefits primarily from the fused kernel implementations, and shows a constant performance improvement regardless of padding amount. While large inputs still benefit from fused kernels, the computation heavy processing limits the benefits that", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "may be obtained by the fused kernels as baseline performance is already closer to the theoretical peak. However, as we increase the amount of padding, performance increases dramatically as increasingly large amounts of computation can be avoided by exploiting the sparsity introduced by padding in NLP workloads.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "Future Work\n\nAs part of our ongoing work on PyTorch BetterTransformer, we are working on extending BetterTransformer improvements to Transformer Decoders. We aim to expand beyond inference to training as well.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "We are partnering to enable BetterTransformer on additional libraries such as FairSeq, MetaSeq, and HuggingFace to benefit all Transformer-based PyTorch models. We\u2019ll provide future updates on the progress of BetterTransformer accelerations for the larger PyTorch ecosystem as part of this blog series.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "Acknowledgements: The authors would like to thank Lin Qiao, Ajit Mathews, Andrew Tulloch, Dmytro Dzhulgakov, Natalia Gimelshein, Emad El-Haraty, Mark Saroufim, Adnan Aziz, Geeta Chauhan, and Hamid Shojanazeri for their support, contributions and many helpful suggestions throughout the course of this project, and in the preparation of this blog.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Experience the power of PyTorch 2.0 on AMD Solutions\"\nauthor: AMD\n---", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch 2.0 represents a significant step forward for the PyTorch machine learning framework. The stable release of PyTorch 2.0 brings new features that unlock even higher performance, while remaining backward compatible with prior releases and retaining the Pythonic focus which has helped to make PyTorch so enthusiastically adopted by the AI/ML community. AMD has long been a strong proponent of PyTorch, and we are delighted that the PyTorch 2.0 stable release includes support for AMD Instinct\u2122 and Radeon\u2122", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "GPUs that are supported by the ROCm\u2122 software platform.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "With the stable PyTorch 2.0 release, PyTorch 2.0 introduces torch.compile as a beta feature underpinned by TorchInductor with support for AMD Instinct and Radeon GPUs through OpenAI Triton deep learning compiler. Through TorchInductor, developers can now generate low level kernels using Triton that are portable and performant to hand-written kernels on native hardware centric kernel programming models.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "OpenAI Triton is a language and compiler for blocked algorithms, which aims to provide an abstraction layer between CUDA/HIP and Torch at which developers can write efficient kernels more productively. We have written a new backend which interfaces Triton's custom MLIR dialects with our ROCm compiler stack.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "Triton can automatically optimize kernels generated by machine learning compilers such as TorchInductor for multiple AI accelerators including AMD Instinct GPU accelerator by leveraging hardware-specific features of the AMD CDNA\u2122 GPU architecture. This makes it easy for developers and users to switch seamlessly from any HW to AMD Instinct GPU accelerators and get great out of the box performance.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "In addition, compilers like Triton can also enable developers to use high-level programming languages, such as Python, to write machine learning code that can be efficiently compiled and executed on specialized hardware. This can help greatly improve the productivity of machine learning developers, as they can focus on the algorithmic aspects of their models and rely on the compiler to generate efficient code.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "By design, PyTorch 2.0 is backward compatible to earlier PyTorch releases. This holds true for the ROCm build of PyTorch 2.0 as well. Developers using PyTorch with AMD GPUs can migrate to PyTorch 2.0 with the confidence that their existing code will continue to work without any required changes, so there is no penalty to access the improvements that come with this release. On the other hand, using PyTorch 2.0 and TorchInductor can result in significant performance improvement over the default eager-mode as", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "shown below.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "The initial results using AMD Instinct MI250 GPUs already shows strong performance improvement with minimal optimization on TorchInductor compared to the default eager-mode. We see an average performance increase of up to 1.54X on 44 out of the 45 models on HuggingFace benchmarks suite with CamemBert, DistillGPT2 and T5Small being a few of the standout models with up to 1.5X or more performance improvement over eager-mode. We are looking forward to continued engagement with members of the PyTorch team at", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "Meta to enable further optimization on ROCm software stack and the additional performance improvement for future PyTorch releases.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "{:style=\"max-height:800px; width:100%\"} \n\nImage 1: AMD MI250 GPU performance improvement for TorchInductor vs eager-mode using HuggingFace MI200-89.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch 2.0 follows the same set of install options as before to build and install for supporting AMD GPUs. These include an installable Python package hosted at [pytorch.org](https://pytorch.org/), AMD\u2019s public PyTorch docker image, and of course the option to build from source using the upstream PyTorch repository. As with PyTorch builds for other platforms, the specific command line to be run for pip-based install is provided by the configurator at", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "[https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/).", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "The GPUs supported by the ROCm software platform which forms the basis for PyTorch support on AMD GPUs are documented at [https://docs.amd.com/bundle/Hardware_and_Software_Reference_Guide/page/Hardware_and_Software_Support.html](https://docs.amd.com/bundle/Hardware_and_Software_Reference_Guide/page/Hardware_and_Software_Support.html)", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "Conclusion", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch 2.0 represents a major step in continuing to broaden support for ML developers by increasing performance while maintaining a simple, Pythonic interface. This performance uplift is made possible in large part by the new TorchInductor infrastructure, which in turn harnesses the Triton ML programming language and just-in-time compiler. AMD\u2019s support for these technologies allows users to realize the full promise of the new PyTorch architecture. Our GPU support in PyTorch 2.0 is just one manifestation", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "of a larger vision around AI and machine learning. AI/ML plays an important role in multiple AMD product lines, including Instinct and Radeon GPUs, Alveo\u2122 data center accelerators, and both Ryzen\u2122 and EPYC processors. These hardware and software initiatives are all part of AMD\u2019s Pervasive AI vision, and we look forward to addressing the many new challenges and opportunities of this dynamic space.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "MI200-89 \u2013 PyTorch Inductor mode HuggingFace Transformers training speedup, running the standard PyTorch 2.0 test suite, over PyTorch eager-mode comparison based on AMD internal testing on a single GCD as of 3/10/2023 using a 2P AMD EPYC\u2122 7763 production server with 4x AMD Instinct\u2122 MI250 (128GB HBM2e) 560W GPUs with Infinity Fabric\u2122 technology; host ROCm\u2122 5.3, guest ROCm\u2122 5.4.4, PyTorch 2.0.0, Triton 2.0. Server manufacturers may vary configurations, yielding different results. Performance may vary based", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "on factors including use of latest drivers and optimizations.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "\u00a9 2023 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, AMD CDNA, AMD Instinct, EPYC, Radeon, ROCm, Ryzen, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective owners.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing PyTorch Developer Day 2021'\nauthor: Team PyTorch\nfeatured-img: 'assets/images/ptdevday21.gif'\n---\n\nWe are excited to announce PyTorch Developer Day (#PTD2), taking place virtually from December 1 & 2, 2021. Developer Day is designed for developers and users to discuss core technical developments, ideas, and roadmaps. \n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
-{"page_content": "Event Details \n**Technical Talks Live Stream - December 1, 2021**\n\nJoin us for technical talks on a variety of topics, including updates to the core framework, new tools and libraries to support development across a variety of domains, responsible AI and industry use cases. All talks will take place on December 1 and will be live streamed on PyTorch channels.", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
-{"page_content": "Stay up to date by following us on our social channels: [Twitter](https://twitter.com/PyTorch), [Facebook](https://facebook.com/PyTorch), or [LinkedIn](https://www.linkedin.com/company/pytorch).\n\n**Poster Exhibition & Networking - December 2, 2021**", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
-{"page_content": "On the second day, we\u2019ll be hosting an online poster exhibition on Gather.Town. There will be opportunities to meet the authors and learn more about their PyTorch projects as well as network with the community. This poster and networking event is limited to people composed of PyTorch maintainers and contributors, long-time stakeholders and experts in areas relevant to PyTorch\u2019s future. Conversations from the networking event will strongly shape the future of PyTorch. As such, invitations are required to", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
-{"page_content": "attend the networking event.", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
-{"page_content": "Apply for an invitation to the networking event by clicking [here](https://pytorchdeveloperday.fbreg.com/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
-{"page_content": "Call for Content Now Open\n\nSubmit your poster abstracts today! Please send us the title and brief summary of your project, tools and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects related to PyTorch development, Responsible AI or Mobile. Please no sales pitches. **Deadline for submission is September 24, 2021**.", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
-{"page_content": "You can submit your poster abstract during your application & registration process [here](https://pytorchdeveloperday.fbreg.com/apply).\n\nVisit the [event website](https://pytorchdeveloperday.fbreg.com/) for more information and we look forward to having you at PyTorch Developer Day. For any questions about the event, contact [pytorch@fbreg.com](mailto:pytorch@fbreg.com).", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Efficient PyTorch: Tensor Memory Format Matters'\nauthor: 'Dhruv Matani, Suraj Subramanian'\nfeatured-img: ''\n---\n\nEnsuring the right memory format for your inputs can significantly impact the running time of your PyTorch vision models. When in doubt, choose a Channels Last memory format.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "When dealing with vision models in PyTorch that accept multimedia (for example image Tensorts) as input, the Tensor\u2019s memory format can significantly impact **the inference execution speed of your model on mobile platforms when using the CPU backend along with XNNPACK**. This holds true for training and inference on server platforms as well, but latency is particularly critical for mobile devices and users.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Outline of this article\n1. Deep Dive into matrix storage/memory representation in C++. Introduction to [Row and Column major order](https://en.wikipedia.org/wiki/Row-_and_column-major_order).\n2. Impact of looping over a matrix in the same or different order as the storage representation, along with an example.\n3. Introduction to Cachegrind; a tool to inspect the cache friendliness of your code.\n4. Memory formats supported by PyTorch Operators.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "5. Best practices example to ensure efficient model execution with XNNPACK optimizations", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Matrix Storage Representation in C++\n\nImages are fed into PyTorch ML models as multi-dimensional Tensors. These Tensors have specific memory formats. To understand this concept better, let\u2019s take a look at how a 2-d matrix may be stored in memory.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Broadly speaking, there are 2 main ways of efficiently storing multi-dimensional data in memory.\n1. **Row Major Order:** In this format, the matrix is stored in row order, with each row stored before the next row in memory. I.e. row N comes before row N+1.\n2. **Column Major Order:** In this format, the matrix is stored in column-order, with each column stored before the next column in memory. I.e. column N comes before column N+1.\n\nYou can see the differences graphically below.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\nC++ stores multi-dimensional data in row-major format.\n
", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Efficiently accessing elements of a 2d matrix\n\nSimilar to the storage format, there are 2 ways to access data in a 2d matrix.\n\n1. **Loop Over Rows first:** All elements of a row are processed before any element of the next row.\n2. **Loop Over Columns first:** All elements of a column are processed before any element of the next column.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "For maximum efficiency, one should always access data in the same format in which it is stored. I.e. if the data is stored in row-major order, then one should try to access it in that order.\n\nThe code below (main.cpp) shows [2 ways](https://stackoverflow.com/questions/9936132/why-does-the-order-of-the-loops-affect-performance-when-iterating-over-a-2d-arra) of accessing all the elements of a 2d 4000x4000 matrix.\n\n```python\n#include \n#include ", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "// loop1 accesses data in matrix 'a' in row major order,\n// since i is the outer loop variable, and j is the\n// inner loop variable.\nint loop1(int a[4000][4000]) {\n int s = 0;\n for (int i = 0; i < 4000; ++i) {\n for (int j = 0; j < 4000; ++j) {\n s += a[i][j];\n }\n }\n return s;\n}", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "// loop2 accesses data in matrix 'a' in column major order\n// since j is the outer loop variable, and i is the\n// inner loop variable.\nint loop2(int a[4000][4000]) {\n int s = 0;\n for (int j = 0; j < 4000; ++j) {\n for (int i = 0; i < 4000; ++i) {\n s += a[i][j];\n }\n }\n return s;\n}\n\nint main() {\n static int a[4000][4000] = {0};\n for (int i = 0; i < 100; ++i) {\n int x = rand() % 4000;\n int y = rand() % 4000;\n a[x][y] = rand() % 1000;\n }", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "auto start = std::chrono::high_resolution_clock::now();\n auto end = start;\n int s = 0;\n\n#if defined RUN_LOOP1\n start = std::chrono::high_resolution_clock::now();\n\n s = 0;\n for (int i = 0; i < 10; ++i) {\n s += loop1(a);\n s = s % 100;\n }\n end = std::chrono::high_resolution_clock::now();\n\n std::cout << \"s = \" << s << std::endl;\n std::cout << \"Time for loop1: \"\n << std::chrono::duration(end - start).count()\n << \"ms\" << std::endl;\n#endif", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "#if defined RUN_LOOP2\n start = std::chrono::high_resolution_clock::now();\n s = 0;\n for (int i = 0; i < 10; ++i) {\n s += loop2(a);\n s = s % 100;\n }\n end = std::chrono::high_resolution_clock::now();\n\n std::cout << \"s = \" << s << std::endl;\n std::cout << \"Time for loop2: \"\n << std::chrono::duration(end - start).count()\n << \"ms\" << std::endl;\n#endif\n}\n\n\nLet\u2019s build and run this program and see what it prints.\n\ng++ -O2 main.cpp -DRUN_LOOP1 -DRUN_LOOP2\n./a.out\n\n\nPrints the following:", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "s = 70\nTime for loop1: 77.0687ms\ns = 70\nTime for loop2: 1219.49ms", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "loop1() is **15x faster** than loop2(). Why is that? Let\u2019s find out below!", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Measure cache misses using Cachegrind\n\n[Cachegrind](https://courses.cs.washington.edu/courses/cse326/05wi/valgrind-doc/cg_main.html) is a cache profiling tool used to see how many I1 (first level instruction), D1 (first level data), and LL (last level) cache misses your program caused.\n\nLet\u2019s build our program with just loop1() and just loop2() to see how cache friendly each of these functions is.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Build and run/profile just loop1()\n\n```python\ng++ -O2 main.cpp -DRUN_LOOP1\nvalgrind --tool=cachegrind ./a.out\n```", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Prints:", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "```python\n==3299700==\n==3299700== I refs: 643,156,721\n==3299700== I1 misses: 2,077\n==3299700== LLi misses: 2,021\n==3299700== I1 miss rate: 0.00%\n==3299700== LLi miss rate: 0.00%\n==3299700==\n==3299700== D refs: 160,952,192 (160,695,444 rd + 256,748 wr)\n==3299700== D1 misses: 10,021,300 ( 10,018,723 rd + 2,577 wr)\n==3299700== LLd misses: 10,010,916 ( 10,009,147 rd + 1,769 wr)", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "==3299700== D1 miss rate: 6.2% ( 6.2% + 1.0% )\n==3299700== LLd miss rate: 6.2% ( 6.2% + 0.7% )\n==3299700==\n==3299700== LL refs: 10,023,377 ( 10,020,800 rd + 2,577 wr)\n==3299700== LL misses: 10,012,937 ( 10,011,168 rd + 1,769 wr)\n==3299700== LL miss rate: 1.2% ( 1.2% + 0.7% )\n```", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Build and run/profile just loop2()\n\n\n```python\ng++ -O2 main.cpp -DRUN_LOOP2\nvalgrind --tool=cachegrind ./a.out\n```", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Prints:", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "```python\n==3300389==\n==3300389== I refs: 643,156,726\n==3300389== I1 misses: 2,075\n==3300389== LLi misses: 2,018\n==3300389== I1 miss rate: 0.00%\n==3300389== LLi miss rate: 0.00%\n==3300389==\n==3300389== D refs: 160,952,196 (160,695,447 rd + 256,749 wr)\n==3300389== D1 misses: 160,021,290 (160,018,713 rd + 2,577 wr)\n==3300389== LLd misses: 10,014,907 ( 10,013,138 rd + 1,769 wr)", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "==3300389== D1 miss rate: 99.4% ( 99.6% + 1.0% )\n==3300389== LLd miss rate: 6.2% ( 6.2% + 0.7% )\n==3300389==\n==3300389== LL refs: 160,023,365 (160,020,788 rd + 2,577 wr)\n==3300389== LL misses: 10,016,925 ( 10,015,156 rd + 1,769 wr)\n==3300389== LL miss rate: 1.2% ( 1.2% + 0.7% )", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "The main differences between the 2 runs are:\n1. **D1 misses:** 10M v/s 160M\n2. **D1 miss rate:** 6.2% v/s 99.4%\n\nAs you can see, `loop2()` causes many many more (**~16x more**) L1 data cache misses than loop1(). This is why `loop1()` is ~15x faster than loop2().", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Memory Formats supported by PyTorch Operators\n\nWhile PyTorch operators expect all tensors to be in [Channels First (NCHW) dimension format](https://discuss.pytorch.org/t/why-does-pytorch-prefer-using-nchw/83637/4), PyTorch operators support 3 output [memory formats](https://github.com/pytorch/pytorch/blob/master/c10/core/MemoryFormat.h).", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "1. **Contiguous:** Tensor memory is in the same order as the tensor\u2019s dimensions.\n2. **ChannelsLast:** Irrespective of the dimension order, the 2d (image) tensor is laid out as an HWC or [NHWC](https://oneapi-src.github.io/oneDNN/dev_guide_understanding_memory_formats.html) (N: batch, H: height, W: width, C: channels) tensor in memory. The dimensions could be permuted in any order.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "3. **ChannelsLast3d:** For 3d tensors (video tensors), the memory is laid out in THWC (Time, Height, Width, Channels) or NTHWC (N: batch, T: time, H: height, W: width, C: channels) format. The dimensions could be permuted in any order.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "The reason that ChannelsLast is preferred for vision models is because [XNNPACK](https://github.com/google/XNNPACK) (kernel acceleration library) used by PyTorch expects all inputs to be in **Channels Last** format, so if the input to the model isn\u2019t channels last, then it must first be converted to channels last, which is an additional operation.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, most PyTorch operators preserve the input tensor\u2019s memory format, so if the input is Channels First, then the operator needs to first convert to Channels Last, then perform the operation, and then convert back to Channels First.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "When you combine it with the fact that accelerated operators work better with a channels last memory format, you\u2019ll notice that having the operator return back a channels-last memory format is better for subsequent operator calls or you\u2019ll end up having every operator convert to channels-last (should it be more efficient for that specific operator).\n\nFrom the XNNPACK home page:\n\n> \u201cAll operators in XNNPACK support NHWC layout, but additionally allow custom stride along the Channel dimension\".", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Best Practice\n\nThe best way to get the most performance from your PyTorch vision models is to ensure that your input tensor is in a **Channels Last** [memory format](https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html) before it is fed into the model.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "You can get even more speedups by optimizing your model to use the XNNPACK backend (by simply calling `optimize_for_mobile()` on your torchscripted model). Note that XNNPACK models will run slower if the inputs are contiguous, so definitely make sure it is in Channels-Last format.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Working example showing speedup\n\nRun this example on [Google Colab](https://colab.research.google.com/gist/suraj813/ad9aebcbffbdd6d02b23ca7231130a30/channels-last-with-xnnpack.ipynb#scrollTo=xvJN73YWXgDF) - note that runtimes on colab CPUs might not reflect accurate performance; it is recommended to run this code on your local machine.\n\n```python\nimport torch\nfrom torch.utils.mobile_optimizer import optimize_for_mobile\nimport torch.backends.xnnpack\nimport time", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "print(\"XNNPACK is enabled: \", torch.backends.xnnpack.enabled, \"\\n\")\n\nN, C, H, W = 1, 3, 200, 200\nx = torch.rand(N, C, H, W)\nprint(\"Contiguous shape: \", x.shape)\nprint(\"Contiguous stride: \", x.stride())\nprint()\n\nxcl = x.to(memory_format=torch.channels_last)\nprint(\"Channels-Last shape: \", xcl.shape)\nprint(\"Channels-Last stride: \", xcl.stride())", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Outputs:\n \n# XNNPACK is enabled: True\n \n# Contiguous shape: torch.Size([1, 3, 200, 200])\n# Contiguous stride: (120000, 40000, 200, 1)\n \n# Channels-Last shape: torch.Size([1, 3, 200, 200])\n# Channels-Last stride: (120000, 1, 600, 3)", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "The input shape stays the same for contiguous and channels-last formats. Internally however, the tensor's layout has changed as you can see in the strides. Now, the number of jumps required to go across channels is only 1 (instead of 40000 in the contiguous tensor).\nThis better data locality means convolution layers can access all the channels for a given pixel much faster. Let's see now how the memory format affects runtime:\n\n```python\nfrom torchvision.models import resnet34, resnet50, resnet101", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "m = resnet34(pretrained=False)\n# m = resnet50(pretrained=False)\n# m = resnet101(pretrained=False)\n\ndef get_optimized_model(mm):\n mm = mm.eval()\n scripted = torch.jit.script(mm)\n optimized = optimize_for_mobile(scripted) # explicitly call the xnnpack rewrite \n return scripted, optimized\n\n\ndef compare_contiguous_CL(mm):\n # inference on contiguous\n start = time.perf_counter()\n for i in range(20):\n mm(x)\n end = time.perf_counter()\n print(\"Contiguous: \", end-start)", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "# inference on channels-last\n start = time.perf_counter()\n for i in range(20):\n mm(xcl)\n end = time.perf_counter()\n print(\"Channels-Last: \", end-start)\n\nwith torch.inference_mode():\n scripted, optimized = get_optimized_model(m)\n\n print(\"Runtimes for torchscripted model: \")\n compare_contiguous_CL(scripted.eval())\n print()\n print(\"Runtimes for mobile-optimized model: \")\n compare_contiguous_CL(optimized.eval())", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "Outputs (on an Intel Core i9 CPU):\n \n# Runtimes for torchscripted model:\n# Contiguous: 1.6711160129999598\n# Channels-Last: 1.6678222839999535\n \n# Runtimes for mobile-optimized model:\n# Contiguous: 0.5712863490000473\n# Channels-Last: 0.46113000699995155\n\n```\n\n## Conclusion\n\nThe Memory Layout of an input tensor can significantly impact a model\u2019s running time. For Vision Models, prefer a **Channels Last** memory format to get the most out of your PyTorch models.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "References", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "- [Row/Column Major matrix storage order](https://en.wikipedia.org/wiki/Row-_and_column-major_order)\n- [Loop order impact on performance](https://stackoverflow.com/questions/9936132/why-does-the-order-of-the-loops-affect-performance-when-iterating-over-a-2d-arra)\n- [Cachegrind: a cache-miss profiler](https://courses.cs.washington.edu/courses/cse326/05wi/valgrind-doc/cg_main.html)\n- [NHWC format explained](https://oneapi-src.github.io/oneDNN/dev_guide_understanding_memory_formats.html)", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "- [Why does PyTorch prefer NCHW?](https://discuss.pytorch.org/t/why-does-pytorch-prefer-using-nchw/83637/4)\n- [XNNPACK](https://github.com/google/XNNPACK)\n- [PyTorch memory format tutorial](https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html)\n- [Supported operators](https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support)", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch framework for cryptographically secure random number generation, torchcsprng, now available'\nauthor: Team PyTorch\n---", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "One of the key components of modern cryptography is the pseudorandom number generator. Katz and Lindell stated, \"The use of badly designed or inappropriate random number generators can often leave a good cryptosystem vulnerable to attack. Particular care must be taken to use a random number generator that is designed for cryptographic use, rather than a 'general-purpose' random number generator which may be fine for some applications but not ones that are required to be cryptographically secure.\"[1]", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, most pseudorandom number generators scale poorly to massively parallel high-performance computation because of their sequential nature. Others don\u2019t satisfy cryptographically secure properties.", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "[torchcsprng](https://github.com/pytorch/csprng) is a PyTorch [C++/CUDA extension](https://pytorch.org/tutorials/advanced/cpp_extension.html) that provides [cryptographically secure pseudorandom number generators](https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator) for PyTorch.", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "torchcsprng overview", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "Historically, PyTorch had only two pseudorandom number generator implementations: Mersenne Twister for CPU and Nvidia\u2019s cuRAND Philox for CUDA. Despite good performance properties, neither of them are suitable for cryptographic applications. Over the course of the past several months, the PyTorch team developed the torchcsprng extension API. Based on PyTorch dispatch mechanism and operator registration, it allows the users to extend c10::GeneratorImpl and implement their own custom pseudorandom number", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "generator.", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "torchcsprng generates a random 128-bit key on the CPU using one of its generators and then runs AES128 in [CTR mode](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_(CTR)) either on CPU or GPU using CUDA. This then generates a random 128-bit state and applies a transformation function to map it to target tensor values. This approach is based on [Parallel Random Numbers: As Easy as 1, 2, 3 (John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw, D. E. Shaw", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "Research)](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf). It makes torchcsprng both crypto-secure and parallel on both CPU and CUDA.", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
\n\nSince torchcsprng is a PyTorch extension, it is available on the platforms where PyTorch is available (support for Windows-CUDA will be available in the coming months).", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "Using torchcsprng\n\nThe torchcsprng API is very simple to use and is fully compatible with the PyTorch random infrastructure:\n\n**Step 1: Install via binary distribution**\n\nAnaconda:\n\n```python\nconda install torchcsprng -c pytorch\n ```\n\npip:\n\n```python\npip install torchcsprng\n ```\n\n**Step 2: import packages as usual but add csprng**\n\n```python\nimport torch\nimport torchcsprng as csprng", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "**Step 3: Create a cryptographically secure pseudorandom number generator from /dev/urandom:**\n\n```python\nurandom_gen = csprng.create_random_device_generator('/dev/urandom')\n ```\n \nand simply use it with the existing PyTorch methods:\n\n```python\ntorch.randn(10, device='cpu', generator=urandom_gen)\n ```\n\n**Step 4: Test with Cuda**\n\nOne of the advantages of torchcsprng generators is that they can be used with both CPU and CUDA tensors:\n\n```python\ntorch.randn(10, device='cuda', generator=urandom_gen)", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "Another advantage of torchcsprng generators is that they are parallel on CPU unlike the default PyTorch CPU generator.", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "Getting Started\n\nThe easiest way to get started with torchcsprng is by visiting the [GitHub page](https://github.com/pytorch/csprng) where you can find installation and build instructions, and more how-to examples. \n\nCheers,\n\nThe PyTorch Team\n\n[1] [Introduction to Modern Cryptography: Principles and Protocols (Chapman & Hall/CRC Cryptography and Network Security Series)](https://www.amazon.com/Introduction-Modern-Cryptography-Principles-Protocols/dp/1584885513) by Jonathan Katz and Yehuda Lindell", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Optimizing Production PyTorch Models\u2019 Performance with Graph Transformations\"\nauthor: Jade Nie, CK Luk, Xiaodong Wang, Jackie (Jiaqi) Xu\nfeatured-img: \"assets/images/blog1-3b.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "1. Introduction", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch supports two execution modes [1]: eager mode and graph mode. In eager mode, operators in a model are immediately executed as they are encountered. In contrast, in graph mode, operators are first synthesized into a graph, which will then be compiled and executed as a whole. Eager mode is easier to use, more suitable for ML researchers, and hence is the default mode of execution. On the other hand, graph mode typically delivers higher performance and hence is heavily used in production.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "Specifically, graph mode enables operator fusion [2], wherein one operator is merged with another to reduce/localize memory reads as well as total kernel launch overhead. Fusion can be horizontal\u2014taking a single operation (e.g., BatchNorm) that is independently applied to many operands and merging those operands into an array; and vertical\u2014merging a kernel with another kernel that consumes the output of the first kernel (e.g., Convolution followed by ReLU).", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "Torch.FX [3, 4] (abbreviated as FX) is a publicly available toolkit as part of the PyTorch package that supports graph mode execution. In particular, it (1) captures the graph from a PyTorch program and (2) allows developers to write transformations on the captured graph. It is used inside Meta to optimize the training throughput of production models. By introducing a number of FX-based optimizations developed at Meta, we demonstrate the approach of using graph transformation to optimize PyTorch\u2019s", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "performance for production.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "2. Background\n\nEmbedding tables are ubiquitous in recommendation systems. Section 3 will discuss three FX transformations that optimize accesses to embedding tables. In this section, we provide some background on FX (Section 2.1) and embedding tables (Section 2.2).", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "2.1 FX\n\nFigure 1 is a simple example adopted from [3] which illustrates using FX to transform a PyTorch program. It contains three steps: (1) capturing the graph from a program, (2) modifying the graph (in this example, all uses of RELU are replaced by GELU), and (3) generating a new program from the modified graph.\n\n\n
\n
\n\n**Figure 1: A FX example which replaces all uses of RELU by GELU in a PyTorch module.**", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "The FX API [4] provides many more functionalities for inspecting and transforming PyTorch program graphs.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "2.2 Embedding Tables\n\n\n
\n
\n\n**Figure 2: Illustration of an embedding table for a sparse feature with batch size = 1**", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "In a recommendation system, sparse features (e.g., User ID, Story ID) are represented by embedding tables. An embedding table E is an HxD matrix, where H is the hash size, D is the embedding dimension. Each row of E is a vector of floats. Feature hashing [5] is used to map a sparse feature to a list of indices to E, say [S1,S2, \u2026, Sk], where 0<=Si<H. Its output value is computed as f(E[S1], E[S2], \u2026, E[Sk]), where", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "E[Si] is the vector at row Si, and f is called the pooling function, which is typically one of the following functions: sum, average, maximum. See Figure 2 for an illustration.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "To fully utilize the GPU, sparse features are usually processed in a batch. Each entity in a batch has its own list of indices. If a batch has B entities, a naive representation has B lists of indices. A more compact representation is to combine the B lists of indices into a single list of indices and add a list of the lengths of indices (one length for each entity in the batch). For example, if a batch has 3 entities whose lists of indices are as follows:", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "- Entity 1: indices = [10, 20]\n- Entity 2: indices = [5, 9, 77, 81]\n- Entity 3: indices = [15, 20, 45]\n\nThen the indices and lengths for the entire batch will be:\n\n- Indices = [10, 20, 5, 9, 77, 81, 15, 20, 45]\n- Lengths = [2, 4, 3]\n\nAnd the output of the embedding table lookup for the whole batch is a BxD matrix.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "3. Three FX Transformations\n\nWe have developed three FX transformations that accelerate accesses to embedding tables. Section 3.1 discusses a transformation that combines multiple small input tensors into a single big tensor; Section 3.2 a transformation that fuses multiple, parallel compute chains into a single compute chain; and Section 3.3 a transformation that overlaps communication with computation.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "3.1 Combining Input Sparse Features", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "Recall that an input sparse feature in a batch is represented by two lists: a list of indices and a list of B lengths, where B is the batch size. In PyTorch, these two lists are implemented as two tensors. When a PyTorch model is run on a GPU, embedding tables are commonly stored in the GPU memory (which is closer to the GPU and has much higher read/write bandwidth than the CPU memory). To use an input sparse feature, its two tensors need to be first copied from CPU to GPU. Nevertheless, per host-to-device", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "memory copying requires a kernel launch, which is relatively expensive compared to the actual data transfer time. If a model uses many input sparse features, this copying could become a performance bottleneck (e.g., 1000 input sparse features would require copying 2000 tensors from host to device).", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "An optimization that reduces the number of host-to-device memcpy is to combine multiple input sparse features before sending them to the device. For instance, given the following three input features:\n\n- Feature_A: indices = [106, 211, 7], lengths = [2, 1]\n- Feature_B: indices = [52, 498, 616, 870, 1013], lengths = [3, 2]\n- Feature_C: indices = [2011, 19, 351, 790], lengths = [1, 3]\n\nThe combined form is:", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "- Features_A_B_C: indices = [106, 211, 7, 52, 498, 616, 870, 1013, 2011, 19, 351, 790], lengths = [2, 1, 3, 2, 1, 3]\n\nSo, instead of copying 3x2=6 tensors from host to device, we only need to copy 2 tensors.\n\nFigure 3(b) describes an implementation of this optimization, which has two components:", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "- On the CPU side: The input pipeline is modified to combine all the indices of sparse features into a single tensor and similarly all the lengths into another tensor. Then the two tensors are copied to the GPU.\n- On the GPU side: Using FX, we insert a Permute_and_Split op into the model graph to recover the indices and lengths tensors of individual features from the combined tensors, and route them to the corresponding nodes downstream.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n(a). **Without the optimization**\n\n\n
\n
\n\n(b). **With the optimization**\n\n**Figure 3: Combining input sparse features**", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "3.2 Horizontal fusion of computation chains started with accesses to embedding tables", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "In a production model, it is fairly common to have 10s of embedding tables residing on each GPU. For performance reasons, lookups to these tables are grouped together so that their outputs are concatenated in a single big tensor (see the red part in Figure 4(a)). To apply computations to individual feature outputs, a Split op is used to divide the big tensors into N smaller tensors (where N is the number of features) and then the desired computations are applied to each tensor. This is shown in Figure 4(a),", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "where the computation applied to each feature output O is Tanh(LayerNorm(O)). All the computation results are concatenated back to a big tensor, which is then passed to downstream ops (Op1 in Figure 4(a)).", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "BetterTransformer includes two types of optimization: (1) fused kernels implementing multiple operations more efficiently in a single kernel, and (2) exploiting sparsity by avoiding unnecessary processing on padding tokens. Enhanced performance for small input sizes benefits primarily from the fused kernel implementations, and shows a constant performance improvement regardless of padding amount. While large inputs still benefit from fused kernels, the computation heavy processing limits the benefits that may be obtained by the fused kernels as baseline performance is already closer to the theoretical peak. However, as we increase the amount of padding, performance increases dramatically as increasingly large amounts of computation can be avoided by exploiting the sparsity introduced by padding in NLP workloads.\n\n## Future Work", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
+{"page_content": "## Future Work\n\nAs part of our ongoing work on PyTorch BetterTransformer, we are working on extending BetterTransformer improvements to Transformer Decoders. We aim to expand beyond inference to training as well.\n\nWe are partnering to enable BetterTransformer on additional libraries such as FairSeq, MetaSeq, and HuggingFace to benefit all Transformer-based PyTorch models. We\u2019ll provide future updates on the progress of BetterTransformer accelerations for the larger PyTorch ecosystem as part of this blog series.\n\nAcknowledgements: The authors would like to thank Lin Qiao, Ajit Mathews, Andrew Tulloch, Dmytro Dzhulgakov, Natalia Gimelshein, Emad El-Haraty, Mark Saroufim, Adnan Aziz, Geeta Chauhan, and Hamid Shojanazeri for their support, contributions and many helpful suggestions throughout the course of this project, and in the preparation of this blog.", "metadata": {"source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Experience the power of PyTorch 2.0 on AMD Solutions\"\nauthor: AMD\n---\n\nPyTorch 2.0 represents a significant step forward for the PyTorch machine learning framework. The stable release of PyTorch 2.0 brings new features that unlock even higher performance, while remaining backward compatible with prior releases and retaining the Pythonic focus which has helped to make PyTorch so enthusiastically adopted by the AI/ML community. AMD has long been a strong proponent of PyTorch, and we are delighted that the PyTorch 2.0 stable release includes support for AMD Instinct\u2122 and Radeon\u2122 GPUs that are supported by the ROCm\u2122 software platform.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
+{"page_content": "With the stable PyTorch 2.0 release, PyTorch 2.0 introduces torch.compile as a beta feature underpinned by TorchInductor with support for AMD Instinct and Radeon GPUs through OpenAI Triton deep learning compiler. Through TorchInductor, developers can now generate low level kernels using Triton that are portable and performant to hand-written kernels on native hardware centric kernel programming models.\n\nOpenAI Triton is a language and compiler for blocked algorithms, which aims to provide an abstraction layer between CUDA/HIP and Torch at which developers can write efficient kernels more productively. We have written a new backend which interfaces Triton's custom MLIR dialects with our ROCm compiler stack.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
+{"page_content": "Triton can automatically optimize kernels generated by machine learning compilers such as TorchInductor for multiple AI accelerators including AMD Instinct GPU accelerator by leveraging hardware-specific features of the AMD CDNA\u2122 GPU architecture. This makes it easy for developers and users to switch seamlessly from any HW to AMD Instinct GPU accelerators and get great out of the box performance. \n\nIn addition, compilers like Triton can also enable developers to use high-level programming languages, such as Python, to write machine learning code that can be efficiently compiled and executed on specialized hardware. This can help greatly improve the productivity of machine learning developers, as they can focus on the algorithmic aspects of their models and rely on the compiler to generate efficient code.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
+{"page_content": "By design, PyTorch 2.0 is backward compatible to earlier PyTorch releases. This holds true for the ROCm build of PyTorch 2.0 as well. Developers using PyTorch with AMD GPUs can migrate to PyTorch 2.0 with the confidence that their existing code will continue to work without any required changes, so there is no penalty to access the improvements that come with this release. On the other hand, using PyTorch 2.0 and TorchInductor can result in significant performance improvement over the default eager-mode as shown below.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
+{"page_content": "The initial results using AMD Instinct MI250 GPUs already shows strong performance improvement with minimal optimization on TorchInductor compared to the default eager-mode. We see an average performance increase of up to 1.54X on 44 out of the 45 models on HuggingFace benchmarks suite with CamemBert, DistillGPT2 and T5Small being a few of the standout models with up to 1.5X or more performance improvement over eager-mode. We are looking forward to continued engagement with members of the PyTorch team at Meta to enable further optimization on ROCm software stack and the additional performance improvement for future PyTorch releases. \n\n{:style=\"max-height:800px; width:100%\"} \n\nImage 1: AMD MI250 GPU performance improvement for TorchInductor vs eager-mode using HuggingFace MI200-89.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
+{"page_content": "PyTorch 2.0 follows the same set of install options as before to build and install for supporting AMD GPUs. These include an installable Python package hosted at [pytorch.org](https://pytorch.org/), AMD\u2019s public PyTorch docker image, and of course the option to build from source using the upstream PyTorch repository. As with PyTorch builds for other platforms, the specific command line to be run for pip-based install is provided by the configurator at [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/).\n\nThe GPUs supported by the ROCm software platform which forms the basis for PyTorch support on AMD GPUs are documented at [https://docs.amd.com/bundle/Hardware_and_Software_Reference_Guide/page/Hardware_and_Software_Support.html](https://docs.amd.com/bundle/Hardware_and_Software_Reference_Guide/page/Hardware_and_Software_Support.html)\n\n## Conclusion", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
+{"page_content": "## Conclusion\n\nPyTorch 2.0 represents a major step in continuing to broaden support for ML developers by increasing performance while maintaining a simple, Pythonic interface. This performance uplift is made possible in large part by the new TorchInductor infrastructure, which in turn harnesses the Triton ML programming language and just-in-time compiler. AMD\u2019s support for these technologies allows users to realize the full promise of the new PyTorch architecture. Our GPU support in PyTorch 2.0 is just one manifestation of a larger vision around AI and machine learning. AI/ML plays an important role in multiple AMD product lines, including Instinct and Radeon GPUs, Alveo\u2122 data center accelerators, and both Ryzen\u2122 and EPYC processors. These hardware and software initiatives are all part of AMD\u2019s Pervasive AI vision, and we look forward to addressing the many new challenges and opportunities of this dynamic space.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
+{"page_content": "MI200-89 \u2013 PyTorch Inductor mode HuggingFace Transformers training speedup, running the standard PyTorch 2.0 test suite, over PyTorch eager-mode comparison based on AMD internal testing on a single GCD as of 3/10/2023 using a 2P AMD EPYC\u2122 7763 production server with 4x AMD Instinct\u2122 MI250 (128GB HBM2e) 560W GPUs with Infinity Fabric\u2122 technology; host ROCm\u2122 5.3, guest ROCm\u2122 5.4.4, PyTorch 2.0.0, Triton 2.0. Server manufacturers may vary configurations, yielding different results. Performance may vary based on factors including use of latest drivers and optimizations. \n\n\u00a9 2023 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, AMD CDNA, AMD Instinct, EPYC, Radeon, ROCm, Ryzen, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective owners.", "metadata": {"source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing PyTorch Developer Day 2021'\nauthor: Team PyTorch\nfeatured-img: 'assets/images/ptdevday21.gif'\n---\n\nWe are excited to announce PyTorch Developer Day (#PTD2), taking place virtually from December 1 & 2, 2021. Developer Day is designed for developers and users to discuss core technical developments, ideas, and roadmaps. \n\n\n

\n
\n\n## Event Details \n**Technical Talks Live Stream - December 1, 2021**\n\nJoin us for technical talks on a variety of topics, including updates to the core framework, new tools and libraries to support development across a variety of domains, responsible AI and industry use cases. All talks will take place on December 1 and will be live streamed on PyTorch channels.", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
+{"page_content": "Stay up to date by following us on our social channels: [Twitter](https://twitter.com/PyTorch), [Facebook](https://facebook.com/PyTorch), or [LinkedIn](https://www.linkedin.com/company/pytorch).\n\n**Poster Exhibition & Networking - December 2, 2021**\n\nOn the second day, we\u2019ll be hosting an online poster exhibition on Gather.Town. There will be opportunities to meet the authors and learn more about their PyTorch projects as well as network with the community. This poster and networking event is limited to people composed of PyTorch maintainers and contributors, long-time stakeholders and experts in areas relevant to PyTorch\u2019s future. Conversations from the networking event will strongly shape the future of PyTorch. As such, invitations are required to attend the networking event. \n\nApply for an invitation to the networking event by clicking [here](https://pytorchdeveloperday.fbreg.com/).\n\n## Call for Content Now Open", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
+{"page_content": "Submit your poster abstracts today! Please send us the title and brief summary of your project, tools and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects related to PyTorch development, Responsible AI or Mobile. Please no sales pitches. **Deadline for submission is September 24, 2021**. \n\nYou can submit your poster abstract during your application & registration process [here](https://pytorchdeveloperday.fbreg.com/apply).\n\nVisit the [event website](https://pytorchdeveloperday.fbreg.com/) for more information and we look forward to having you at PyTorch Developer Day. For any questions about the event, contact [pytorch@fbreg.com](mailto:pytorch@fbreg.com).", "metadata": {"source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Efficient PyTorch: Tensor Memory Format Matters'\nauthor: 'Dhruv Matani, Suraj Subramanian'\nfeatured-img: ''\n---\n\nEnsuring the right memory format for your inputs can significantly impact the running time of your PyTorch vision models. When in doubt, choose a Channels Last memory format.\n\nWhen dealing with vision models in PyTorch that accept multimedia (for example image Tensorts) as input, the Tensor\u2019s memory format can significantly impact **the inference execution speed of your model on mobile platforms when using the CPU backend along with XNNPACK**. This holds true for training and inference on server platforms as well, but latency is particularly critical for mobile devices and users.\n\n", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "## Outline of this article\n1. Deep Dive into matrix storage/memory representation in C++. Introduction to [Row and Column major order](https://en.wikipedia.org/wiki/Row-_and_column-major_order).\n2. Impact of looping over a matrix in the same or different order as the storage representation, along with an example.\n3. Introduction to Cachegrind; a tool to inspect the cache friendliness of your code.\n4. Memory formats supported by PyTorch Operators.\n5. Best practices example to ensure efficient model execution with XNNPACK optimizations\n\n## Matrix Storage Representation in C++\n\nImages are fed into PyTorch ML models as multi-dimensional Tensors. These Tensors have specific memory formats. To understand this concept better, let\u2019s take a look at how a 2-d matrix may be stored in memory.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "Broadly speaking, there are 2 main ways of efficiently storing multi-dimensional data in memory.\n1. **Row Major Order:** In this format, the matrix is stored in row order, with each row stored before the next row in memory. I.e. row N comes before row N+1.\n2. **Column Major Order:** In this format, the matrix is stored in column-order, with each column stored before the next column in memory. I.e. column N comes before column N+1.\n\nYou can see the differences graphically below.\n\n\n
\n
\nC++ stores multi-dimensional data in row-major format.\n
\n\n## Efficiently accessing elements of a 2d matrix\n\nSimilar to the storage format, there are 2 ways to access data in a 2d matrix.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "1. **Loop Over Rows first:** All elements of a row are processed before any element of the next row.\n2. **Loop Over Columns first:** All elements of a column are processed before any element of the next column.\n\nFor maximum efficiency, one should always access data in the same format in which it is stored. I.e. if the data is stored in row-major order, then one should try to access it in that order.\n\nThe code below (main.cpp) shows [2 ways](https://stackoverflow.com/questions/9936132/why-does-the-order-of-the-loops-affect-performance-when-iterating-over-a-2d-arra) of accessing all the elements of a 2d 4000x4000 matrix.\n\n```python\n#include \n#include \n\n// loop1 accesses data in matrix 'a' in row major order,\n// since i is the outer loop variable, and j is the\n// inner loop variable.\nint loop1(int a[4000][4000]) {\n int s = 0;\n for (int i = 0; i < 4000; ++i) {\n for (int j = 0; j < 4000; ++j) {\n s += a[i][j];\n }\n }\n return s;\n}", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "// loop2 accesses data in matrix 'a' in column major order\n// since j is the outer loop variable, and i is the\n// inner loop variable.\nint loop2(int a[4000][4000]) {\n int s = 0;\n for (int j = 0; j < 4000; ++j) {\n for (int i = 0; i < 4000; ++i) {\n s += a[i][j];\n }\n }\n return s;\n}\n\nint main() {\n static int a[4000][4000] = {0};\n for (int i = 0; i < 100; ++i) {\n int x = rand() % 4000;\n int y = rand() % 4000;\n a[x][y] = rand() % 1000;\n }\n\n auto start = std::chrono::high_resolution_clock::now();\n auto end = start;\n int s = 0;\n\n#if defined RUN_LOOP1\n start = std::chrono::high_resolution_clock::now();\n\n s = 0;\n for (int i = 0; i < 10; ++i) {\n s += loop1(a);\n s = s % 100;\n }\n end = std::chrono::high_resolution_clock::now();\n\n std::cout << \"s = \" << s << std::endl;\n std::cout << \"Time for loop1: \"\n << std::chrono::duration(end - start).count()\n << \"ms\" << std::endl;\n#endif", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "#if defined RUN_LOOP2\n start = std::chrono::high_resolution_clock::now();\n s = 0;\n for (int i = 0; i < 10; ++i) {\n s += loop2(a);\n s = s % 100;\n }\n end = std::chrono::high_resolution_clock::now();\n\n std::cout << \"s = \" << s << std::endl;\n std::cout << \"Time for loop2: \"\n << std::chrono::duration(end - start).count()\n << \"ms\" << std::endl;\n#endif\n}\n\n\nLet\u2019s build and run this program and see what it prints.\n\ng++ -O2 main.cpp -DRUN_LOOP1 -DRUN_LOOP2\n./a.out\n\n\nPrints the following:\n\ns = 70\nTime for loop1: 77.0687ms\ns = 70\nTime for loop2: 1219.49ms\n```\n\nloop1() is **15x faster** than loop2(). Why is that? Let\u2019s find out below!\n\n## Measure cache misses using Cachegrind\n\n[Cachegrind](https://courses.cs.washington.edu/courses/cse326/05wi/valgrind-doc/cg_main.html) is a cache profiling tool used to see how many I1 (first level instruction), D1 (first level data), and LL (last level) cache misses your program caused.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "Let\u2019s build our program with just loop1() and just loop2() to see how cache friendly each of these functions is.\n\n### Build and run/profile just loop1()\n\n```python\ng++ -O2 main.cpp -DRUN_LOOP1\nvalgrind --tool=cachegrind ./a.out\n```\n\n#### Prints:", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "#### Prints:\n\n```python\n==3299700==\n==3299700== I refs: 643,156,721\n==3299700== I1 misses: 2,077\n==3299700== LLi misses: 2,021\n==3299700== I1 miss rate: 0.00%\n==3299700== LLi miss rate: 0.00%\n==3299700==\n==3299700== D refs: 160,952,192 (160,695,444 rd + 256,748 wr)\n==3299700== D1 misses: 10,021,300 ( 10,018,723 rd + 2,577 wr)\n==3299700== LLd misses: 10,010,916 ( 10,009,147 rd + 1,769 wr)\n==3299700== D1 miss rate: 6.2% ( 6.2% + 1.0% )\n==3299700== LLd miss rate: 6.2% ( 6.2% + 0.7% )\n==3299700==\n==3299700== LL refs: 10,023,377 ( 10,020,800 rd + 2,577 wr)\n==3299700== LL misses: 10,012,937 ( 10,011,168 rd + 1,769 wr)\n==3299700== LL miss rate: 1.2% ( 1.2% + 0.7% )\n```\n\n### Build and run/profile just loop2()\n\n\n```python\ng++ -O2 main.cpp -DRUN_LOOP2\nvalgrind --tool=cachegrind ./a.out\n```\n\n#### Prints:", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "#### Prints:\n\n```python\n==3300389==\n==3300389== I refs: 643,156,726\n==3300389== I1 misses: 2,075\n==3300389== LLi misses: 2,018\n==3300389== I1 miss rate: 0.00%\n==3300389== LLi miss rate: 0.00%\n==3300389==\n==3300389== D refs: 160,952,196 (160,695,447 rd + 256,749 wr)\n==3300389== D1 misses: 160,021,290 (160,018,713 rd + 2,577 wr)\n==3300389== LLd misses: 10,014,907 ( 10,013,138 rd + 1,769 wr)\n==3300389== D1 miss rate: 99.4% ( 99.6% + 1.0% )\n==3300389== LLd miss rate: 6.2% ( 6.2% + 0.7% )\n==3300389==\n==3300389== LL refs: 160,023,365 (160,020,788 rd + 2,577 wr)\n==3300389== LL misses: 10,016,925 ( 10,015,156 rd + 1,769 wr)\n==3300389== LL miss rate: 1.2% ( 1.2% + 0.7% )\n```\n\nThe main differences between the 2 runs are:\n1. **D1 misses:** 10M v/s 160M\n2. **D1 miss rate:** 6.2% v/s 99.4%", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "As you can see, `loop2()` causes many many more (**~16x more**) L1 data cache misses than loop1(). This is why `loop1()` is ~15x faster than loop2().\n\n## Memory Formats supported by PyTorch Operators\n\nWhile PyTorch operators expect all tensors to be in [Channels First (NCHW) dimension format](https://discuss.pytorch.org/t/why-does-pytorch-prefer-using-nchw/83637/4), PyTorch operators support 3 output [memory formats](https://github.com/pytorch/pytorch/blob/master/c10/core/MemoryFormat.h).", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "1. **Contiguous:** Tensor memory is in the same order as the tensor\u2019s dimensions.\n2. **ChannelsLast:** Irrespective of the dimension order, the 2d (image) tensor is laid out as an HWC or [NHWC](https://oneapi-src.github.io/oneDNN/dev_guide_understanding_memory_formats.html) (N: batch, H: height, W: width, C: channels) tensor in memory. The dimensions could be permuted in any order.\n3. **ChannelsLast3d:** For 3d tensors (video tensors), the memory is laid out in THWC (Time, Height, Width, Channels) or NTHWC (N: batch, T: time, H: height, W: width, C: channels) format. The dimensions could be permuted in any order.\n\nThe reason that ChannelsLast is preferred for vision models is because [XNNPACK](https://github.com/google/XNNPACK) (kernel acceleration library) used by PyTorch expects all inputs to be in **Channels Last** format, so if the input to the model isn\u2019t channels last, then it must first be converted to channels last, which is an additional operation.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "Additionally, most PyTorch operators preserve the input tensor\u2019s memory format, so if the input is Channels First, then the operator needs to first convert to Channels Last, then perform the operation, and then convert back to Channels First.\n\nWhen you combine it with the fact that accelerated operators work better with a channels last memory format, you\u2019ll notice that having the operator return back a channels-last memory format is better for subsequent operator calls or you\u2019ll end up having every operator convert to channels-last (should it be more efficient for that specific operator).\n\nFrom the XNNPACK home page:\n\n> \u201cAll operators in XNNPACK support NHWC layout, but additionally allow custom stride along the Channel dimension\".\n\n## PyTorch Best Practice\n\nThe best way to get the most performance from your PyTorch vision models is to ensure that your input tensor is in a **Channels Last** [memory format](https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html) before it is fed into the model.", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "You can get even more speedups by optimizing your model to use the XNNPACK backend (by simply calling `optimize_for_mobile()` on your torchscripted model). Note that XNNPACK models will run slower if the inputs are contiguous, so definitely make sure it is in Channels-Last format.\n\n## Working example showing speedup\n\nRun this example on [Google Colab](https://colab.research.google.com/gist/suraj813/ad9aebcbffbdd6d02b23ca7231130a30/channels-last-with-xnnpack.ipynb#scrollTo=xvJN73YWXgDF) - note that runtimes on colab CPUs might not reflect accurate performance; it is recommended to run this code on your local machine.\n\n```python\nimport torch\nfrom torch.utils.mobile_optimizer import optimize_for_mobile\nimport torch.backends.xnnpack\nimport time\n\nprint(\"XNNPACK is enabled: \", torch.backends.xnnpack.enabled, \"\\n\")\n\nN, C, H, W = 1, 3, 200, 200\nx = torch.rand(N, C, H, W)\nprint(\"Contiguous shape: \", x.shape)\nprint(\"Contiguous stride: \", x.stride())\nprint()", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "xcl = x.to(memory_format=torch.channels_last)\nprint(\"Channels-Last shape: \", xcl.shape)\nprint(\"Channels-Last stride: \", xcl.stride())\n\n## Outputs:\n \n# XNNPACK is enabled: True\n \n# Contiguous shape: torch.Size([1, 3, 200, 200])\n# Contiguous stride: (120000, 40000, 200, 1)\n \n# Channels-Last shape: torch.Size([1, 3, 200, 200])\n# Channels-Last stride: (120000, 1, 600, 3)\n\n```\n\nThe input shape stays the same for contiguous and channels-last formats. Internally however, the tensor's layout has changed as you can see in the strides. Now, the number of jumps required to go across channels is only 1 (instead of 40000 in the contiguous tensor).\nThis better data locality means convolution layers can access all the channels for a given pixel much faster. Let's see now how the memory format affects runtime:\n\n```python\nfrom torchvision.models import resnet34, resnet50, resnet101\n\nm = resnet34(pretrained=False)\n# m = resnet50(pretrained=False)\n# m = resnet101(pretrained=False)", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "def get_optimized_model(mm):\n mm = mm.eval()\n scripted = torch.jit.script(mm)\n optimized = optimize_for_mobile(scripted) # explicitly call the xnnpack rewrite \n return scripted, optimized\n\n\ndef compare_contiguous_CL(mm):\n # inference on contiguous\n start = time.perf_counter()\n for i in range(20):\n mm(x)\n end = time.perf_counter()\n print(\"Contiguous: \", end-start)\n\n # inference on channels-last\n start = time.perf_counter()\n for i in range(20):\n mm(xcl)\n end = time.perf_counter()\n print(\"Channels-Last: \", end-start)\n\nwith torch.inference_mode():\n scripted, optimized = get_optimized_model(m)\n\n print(\"Runtimes for torchscripted model: \")\n compare_contiguous_CL(scripted.eval())\n print()\n print(\"Runtimes for mobile-optimized model: \")\n compare_contiguous_CL(optimized.eval())", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "## Outputs (on an Intel Core i9 CPU):\n \n# Runtimes for torchscripted model:\n# Contiguous: 1.6711160129999598\n# Channels-Last: 1.6678222839999535\n \n# Runtimes for mobile-optimized model:\n# Contiguous: 0.5712863490000473\n# Channels-Last: 0.46113000699995155\n\n```\n\n## Conclusion\n\nThe Memory Layout of an input tensor can significantly impact a model\u2019s running time. For Vision Models, prefer a **Channels Last** memory format to get the most out of your PyTorch models.\n\n## References", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "## References\n\n- [Row/Column Major matrix storage order](https://en.wikipedia.org/wiki/Row-_and_column-major_order)\n- [Loop order impact on performance](https://stackoverflow.com/questions/9936132/why-does-the-order-of-the-loops-affect-performance-when-iterating-over-a-2d-arra)\n- [Cachegrind: a cache-miss profiler](https://courses.cs.washington.edu/courses/cse326/05wi/valgrind-doc/cg_main.html)\n- [NHWC format explained](https://oneapi-src.github.io/oneDNN/dev_guide_understanding_memory_formats.html)\n- [Why does PyTorch prefer NCHW?](https://discuss.pytorch.org/t/why-does-pytorch-prefer-using-nchw/83637/4)\n- [XNNPACK](https://github.com/google/XNNPACK)\n- [PyTorch memory format tutorial](https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html)\n- [Supported operators](https://github.com/pytorch/pytorch/wiki/Operators-with-Channels-Last-support)", "metadata": {"source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch framework for cryptographically secure random number generation, torchcsprng, now available'\nauthor: Team PyTorch\n---\n\nOne of the key components of modern cryptography is the pseudorandom number generator. Katz and Lindell stated, \"The use of badly designed or inappropriate random number generators can often leave a good cryptosystem vulnerable to attack. Particular care must be taken to use a random number generator that is designed for cryptographic use, rather than a 'general-purpose' random number generator which may be fine for some applications but not ones that are required to be cryptographically secure.\"[1] Additionally, most pseudorandom number generators scale poorly to massively parallel high-performance computation because of their sequential nature. Others don\u2019t satisfy cryptographically secure properties.", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
+{"page_content": "[torchcsprng](https://github.com/pytorch/csprng) is a PyTorch [C++/CUDA extension](https://pytorch.org/tutorials/advanced/cpp_extension.html) that provides [cryptographically secure pseudorandom number generators](https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator) for PyTorch.\n\n## torchcsprng overview \n\nHistorically, PyTorch had only two pseudorandom number generator implementations: Mersenne Twister for CPU and Nvidia\u2019s cuRAND Philox for CUDA. Despite good performance properties, neither of them are suitable for cryptographic applications. Over the course of the past several months, the PyTorch team developed the torchcsprng extension API. Based on PyTorch dispatch mechanism and operator registration, it allows the users to extend c10::GeneratorImpl and implement their own custom pseudorandom number generator.", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
+{"page_content": "torchcsprng generates a random 128-bit key on the CPU using one of its generators and then runs AES128 in [CTR mode](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_(CTR)) either on CPU or GPU using CUDA. This then generates a random 128-bit state and applies a transformation function to map it to target tensor values. This approach is based on [Parallel Random Numbers: As Easy as 1, 2, 3 (John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw, D. E. Shaw Research)](http://www.thesalmons.org/john/random123/papers/random123sc11.pdf). It makes torchcsprng both crypto-secure and parallel on both CPU and CUDA.\n\n\n

\n
\n\nSince torchcsprng is a PyTorch extension, it is available on the platforms where PyTorch is available (support for Windows-CUDA will be available in the coming months). \n\n## Using torchcsprng", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
+{"page_content": "## Using torchcsprng\n\nThe torchcsprng API is very simple to use and is fully compatible with the PyTorch random infrastructure:\n\n**Step 1: Install via binary distribution**\n\nAnaconda:\n\n```python\nconda install torchcsprng -c pytorch\n ```\n\npip:\n\n```python\npip install torchcsprng\n ```\n\n**Step 2: import packages as usual but add csprng**\n\n```python\nimport torch\nimport torchcsprng as csprng\n ```\n\n**Step 3: Create a cryptographically secure pseudorandom number generator from /dev/urandom:**\n\n```python\nurandom_gen = csprng.create_random_device_generator('/dev/urandom')\n ```\n \nand simply use it with the existing PyTorch methods:\n\n```python\ntorch.randn(10, device='cpu', generator=urandom_gen)\n ```\n\n**Step 4: Test with Cuda**\n\nOne of the advantages of torchcsprng generators is that they can be used with both CPU and CUDA tensors:\n\n```python\ntorch.randn(10, device='cuda', generator=urandom_gen)\n ```\n\nAnother advantage of torchcsprng generators is that they are parallel on CPU unlike the default PyTorch CPU generator.", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
+{"page_content": "## Getting Started\n\nThe easiest way to get started with torchcsprng is by visiting the [GitHub page](https://github.com/pytorch/csprng) where you can find installation and build instructions, and more how-to examples. \n\nCheers,\n\nThe PyTorch Team\n\n[1] [Introduction to Modern Cryptography: Principles and Protocols (Chapman & Hall/CRC Cryptography and Network Security Series)](https://www.amazon.com/Introduction-Modern-Cryptography-Principles-Protocols/dp/1584885513) by Jonathan Katz and Yehuda Lindell", "metadata": {"source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Optimizing Production PyTorch Models\u2019 Performance with Graph Transformations\"\nauthor: Jade Nie, CK Luk, Xiaodong Wang, Jackie (Jiaqi) Xu\nfeatured-img: \"assets/images/blog1-3b.png\"\n---\n\n## 1. Introduction\n\nPyTorch supports two execution modes [1]: eager mode and graph mode. In eager mode, operators in a model are immediately executed as they are encountered. In contrast, in graph mode, operators are first synthesized into a graph, which will then be compiled and executed as a whole. Eager mode is easier to use, more suitable for ML researchers, and hence is the default mode of execution. On the other hand, graph mode typically delivers higher performance and hence is heavily used in production.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "Specifically, graph mode enables operator fusion [2], wherein one operator is merged with another to reduce/localize memory reads as well as total kernel launch overhead. Fusion can be horizontal\u2014taking a single operation (e.g., BatchNorm) that is independently applied to many operands and merging those operands into an array; and vertical\u2014merging a kernel with another kernel that consumes the output of the first kernel (e.g., Convolution followed by ReLU).\n\nTorch.FX [3, 4] (abbreviated as FX) is a publicly available toolkit as part of the PyTorch package that supports graph mode execution. In particular, it (1) captures the graph from a PyTorch program and (2) allows developers to write transformations on the captured graph. It is used inside Meta to optimize the training throughput of production models. By introducing a number of FX-based optimizations developed at Meta, we demonstrate the approach of using graph transformation to optimize PyTorch\u2019s performance for production.\n\n## 2. Background", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "## 2. Background\n\nEmbedding tables are ubiquitous in recommendation systems. Section 3 will discuss three FX transformations that optimize accesses to embedding tables. In this section, we provide some background on FX (Section 2.1) and embedding tables (Section 2.2).\n\n### 2.1 FX\n\nFigure 1 is a simple example adopted from [3] which illustrates using FX to transform a PyTorch program. It contains three steps: (1) capturing the graph from a program, (2) modifying the graph (in this example, all uses of RELU are replaced by GELU), and (3) generating a new program from the modified graph.\n\n\n
\n
\n\n**Figure 1: A FX example which replaces all uses of RELU by GELU in a PyTorch module.**\n\nThe FX API [4] provides many more functionalities for inspecting and transforming PyTorch program graphs.\n\n### 2.2 Embedding Tables\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "**Figure 2: Illustration of an embedding table for a sparse feature with batch size = 1**\n\nIn a recommendation system, sparse features (e.g., User ID, Story ID) are represented by embedding tables. An embedding table E is an HxD matrix, where H is the hash size, D is the embedding dimension. Each row of E is a vector of floats. Feature hashing [5] is used to map a sparse feature to a list of indices to E, say [S1,S2, \u2026, Sk], where 0<=Si<H. Its output value is computed as f(E[S1], E[S2], \u2026, E[Sk]), where E[Si] is the vector at row Si, and f is called the pooling function, which is typically one of the following functions: sum, average, maximum. See Figure 2 for an illustration.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "To fully utilize the GPU, sparse features are usually processed in a batch. Each entity in a batch has its own list of indices. If a batch has B entities, a naive representation has B lists of indices. A more compact representation is to combine the B lists of indices into a single list of indices and add a list of the lengths of indices (one length for each entity in the batch). For example, if a batch has 3 entities whose lists of indices are as follows:\n\n- Entity 1: indices = [10, 20]\n- Entity 2: indices = [5, 9, 77, 81]\n- Entity 3: indices = [15, 20, 45]\n\nThen the indices and lengths for the entire batch will be:\n\n- Indices = [10, 20, 5, 9, 77, 81, 15, 20, 45]\n- Lengths = [2, 4, 3]\n\nAnd the output of the embedding table lookup for the whole batch is a BxD matrix.\n\n## 3. Three FX Transformations", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "We have developed three FX transformations that accelerate accesses to embedding tables. Section 3.1 discusses a transformation that combines multiple small input tensors into a single big tensor; Section 3.2 a transformation that fuses multiple, parallel compute chains into a single compute chain; and Section 3.3 a transformation that overlaps communication with computation.\n\n### 3.1 Combining Input Sparse Features", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "Recall that an input sparse feature in a batch is represented by two lists: a list of indices and a list of B lengths, where B is the batch size. In PyTorch, these two lists are implemented as two tensors. When a PyTorch model is run on a GPU, embedding tables are commonly stored in the GPU memory (which is closer to the GPU and has much higher read/write bandwidth than the CPU memory). To use an input sparse feature, its two tensors need to be first copied from CPU to GPU. Nevertheless, per host-to-device memory copying requires a kernel launch, which is relatively expensive compared to the actual data transfer time. If a model uses many input sparse features, this copying could become a performance bottleneck (e.g., 1000 input sparse features would require copying 2000 tensors from host to device).\n\nAn optimization that reduces the number of host-to-device memcpy is to combine multiple input sparse features before sending them to the device. For instance, given the following three input features:", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "- Feature_A: indices = [106, 211, 7], lengths = [2, 1]\n- Feature_B: indices = [52, 498, 616, 870, 1013], lengths = [3, 2]\n- Feature_C: indices = [2011, 19, 351, 790], lengths = [1, 3]\n\nThe combined form is:\n\n- Features_A_B_C: indices = [106, 211, 7, 52, 498, 616, 870, 1013, 2011, 19, 351, 790], lengths = [2, 1, 3, 2, 1, 3]\n\nSo, instead of copying 3x2=6 tensors from host to device, we only need to copy 2 tensors.\n\nFigure 3(b) describes an implementation of this optimization, which has two components:\n\n- On the CPU side: The input pipeline is modified to combine all the indices of sparse features into a single tensor and similarly all the lengths into another tensor. Then the two tensors are copied to the GPU.\n- On the GPU side: Using FX, we insert a Permute_and_Split op into the model graph to recover the indices and lengths tensors of individual features from the combined tensors, and route them to the corresponding nodes downstream.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\n(a). **Without the optimization**\n\n\n
\n
\n\n(b). **With the optimization**\n\n**Figure 3: Combining input sparse features**\n\n### 3.2 Horizontal fusion of computation chains started with accesses to embedding tables", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "In a production model, it is fairly common to have 10s of embedding tables residing on each GPU. For performance reasons, lookups to these tables are grouped together so that their outputs are concatenated in a single big tensor (see the red part in Figure 4(a)). To apply computations to individual feature outputs, a Split op is used to divide the big tensors into N smaller tensors (where N is the number of features) and then the desired computations are applied to each tensor. This is shown in Figure 4(a), where the computation applied to each feature output O is Tanh(LayerNorm(O)). All the computation results are concatenated back to a big tensor, which is then passed to downstream ops (Op1 in Figure 4(a)).", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
{"page_content": "The main runtime cost here is the GPU kernel launch overhead. For instance, the number of GPU kernel launches in Figure 4(a) is 2\\*N + 3 (each oval in the figure is a GPU kernel). This could become a performance issue because execution times of LayerNorm and Tanh on the GPU are short compared to their kernel launch times. In addition, the Split op may create an extra copy of the embedding output tensor, consuming additional GPU memory.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "We use FX to implement an optimization called horizontal fusion which dramatically reduces the number of GPU kernel launches (in this example, the optimized number of GPU kernel launches is 5, see Figure 4(b)). Instead of doing an explicit Split, we use the Add_middle_dim op to reshape the 2D embedding tensor of shape (B, NxD) to a 3D tensor of shape (B, N, D). Then a single LayerNorm is applied to the last dimension of it. Then a single Tanh is applied to the result of the LayerNorm. At the end, we use the", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "Remove_middle_dim op to reshape the Tanh\u2019s result back to a 2D tensor. In addition, since Add_middle_dim and Remove_middle_dim only reshape the tensor without creating an extra copy, the amount of GPU memory consumption could be reduced as well.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n(a). **Without the optimization**\n\n\n
\n
\n\n(b). **With the optimization**\n\n**Figure 4: Horizontal fusion**", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "3.3 Overlapping Computation with Communication\n\nTraining of a production recommendation model is typically done on a distributed GPU system. Since the capacity of the device memory per GPU is not big enough to hold all the embedding tables in the model, they need to be distributed among the GPUs.\n\nWithin a training step, a GPU needs to read/write feature values from/to the embedding tables on the other GPUs. This is known as all-to-all communication [6] and can be a major performance bottleneck.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "We use FX to implement a transformation that can overlap computation with all-to-all communication. Figure 5(a) shows the example of a model graph which has embedding table accesses (EmbeddingAllToAll) and other ops. Without any optimization, they are sequentially executed on a GPU stream, as shown in Figure 5(b). Using FX, we break EmbeddingAllToAll into EmbeddingAllToAll_Request and EmbeddingAllToAll_Wait, and schedule independent ops in between them.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n**(a) Model graph**\n\n\n
\n
\n\n**(b) Original execution order**\n\n\n
\n
\n\n**(c)Optimized execution order**\n\n**Figure 5: Overlapping Computation with Communication**", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "3.4 Summary\n\nTable 1 summarizes the optimizations discussed in this section and the corresponding performance bottlenecks addressed.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "\n \n Optimization\n | \n Performance Bottleneck Addressed\n | \n
\n \n Combining Input Sparse Features\n | \n Host-to-device memory copy\n | \n
\n \n Horizontal fusion\n | \n GPU kernel launch overhead\n | \n
\n \n Overlapping Computation with Communication\n | \n Embedding all-to-all access time\n | \n
\n
", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "**Table 1: Summary of the optimizations and the performance bottlenecks addressed**\n\nWe have also developed other FX transformations which are not discussed in this section due to space limitations.\n\nTo discover which models would benefit from these transformations, we analyzed the performance data collected by MAIProf [7] from the models that run at Meta\u2019s data centers. Altogether, these transformations provide up to 2-3x of speedups compared to eager mode on a set of production models.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "4. Concluding Remarks\n\nThe graph mode in PyTorch is preferred over the eager mode for production use for performance reasons. FX is a powerful tool for capturing and optimizing the graph of a PyTorch program. We demonstrate three FX transformations that are used to optimize production recommendation models inside Meta. We hope that this blog can motivate other PyTorch model developers to use graph transformations to boost their models\u2019 performance.\n\nReferences", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "[1] [End-to-end Machine Learning Framework](https://pytorch.org/features/)\n\n[2] [DNNFusion: Accelerating Deep Neural Networks Execution with Advanced Operator Fusion](https://arxiv.org/abs/2108.13342)\n\n[3] [Torch.FX: Practical Program Capture and Transformation for Deep Learning In Python](https://arxiv.org/pdf/2112.08429.pdf), MLSys 2022.\n\n[4] [Torch.fx\u2014PyTorch 1.12 documentation](https://pytorch.org/docs/stable/fx.html)", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "[5] [Feature Hashing for Large Scale Multitask Learning](https://alex.smola.org/papers/2009/Weinbergeretal09.pdf)\n\n[6] [NVIDIA Collective Communication Library Documentation](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/)\n\n[7] [Performance Debugging of Production PyTorch Models at Meta](https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/)", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.3 adds mobile, privacy, quantization, and named tensors'\nauthor: Team PyTorch\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch continues to gain momentum because of its focus on meeting the needs of researchers, its streamlined workflow for production use, and most of all because of the enthusiastic support it has received from the AI community. PyTorch citations in papers on ArXiv [grew 194 percent in the first half of 2019 alone, as noted by O\u2019Reilly](https://www.oreilly.com/ideas/one-simple-graphic-researchers-love-pytorch-and-tensorflow?fbclid=IwAR3kYmlyD7zky37IYFu0cafQn7yemhl8P-7MNyB30z0q5RDzxcTOrP8kxDk), and the", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "number of contributors to the platform has grown more than 50 percent over the last year, to nearly 1,200. Facebook, Microsoft, Uber, and other organizations across industries are increasingly using it as the foundation for their most important machine learning (ML) research and production workloads.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "We use FX to implement an optimization called horizontal fusion which dramatically reduces the number of GPU kernel launches (in this example, the optimized number of GPU kernel launches is 5, see Figure 4(b)). Instead of doing an explicit Split, we use the Add_middle_dim op to reshape the 2D embedding tensor of shape (B, NxD) to a 3D tensor of shape (B, N, D). Then a single LayerNorm is applied to the last dimension of it. Then a single Tanh is applied to the result of the LayerNorm. At the end, we use the Remove_middle_dim op to reshape the Tanh\u2019s result back to a 2D tensor. In addition, since Add_middle_dim and Remove_middle_dim only reshape the tensor without creating an extra copy, the amount of GPU memory consumption could be reduced as well.\n\n\n
\n
\n\n(a). **Without the optimization**\n\n\n
\n
\n\n(b). **With the optimization**\n\n**Figure 4: Horizontal fusion**", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
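+{"page_content": "The reshape-based pattern above can be illustrated with a short, hypothetical PyTorch sketch. The add_middle_dim and remove_middle_dim helpers below are stand-ins for the custom ops mentioned in the post and are implemented here with plain view calls; the real FX transformation rewrites the captured graph rather than the eager code, and the shared LayerNorm parameters are a simplifying assumption.\n\n```python\nimport torch\nimport torch.nn as nn\n\ndef add_middle_dim(x, n, d):\n    # (B, N*D) -> (B, N, D): a view, so no extra copy is created\n    return x.view(x.size(0), n, d)\n\ndef remove_middle_dim(x):\n    # (B, N, D) -> (B, N*D)\n    return x.view(x.size(0), -1)\n\nB, N, D = 32, 8, 64\nemb = torch.randn(B, N * D)   # pooled embeddings, one row per sample\n\n# One LayerNorm and one Tanh over all N chunks at once, instead of a\n# separate LayerNorm/Tanh kernel launch per chunk.\nfused_ln = nn.LayerNorm(D)\nout = remove_middle_dim(torch.tanh(fused_ln(add_middle_dim(emb, N, D))))\nprint(out.shape)  # torch.Size([32, 512])\n```", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}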
+{"page_content": "### 3.3 Overlapping Computation with Communication\n\nTraining of a production recommendation model is typically done on a distributed GPU system. Since the capacity of the device memory per GPU is not big enough to hold all the embedding tables in the model, they need to be distributed among the GPUs.\n\nWithin a training step, a GPU needs to read/write feature values from/to the embedding tables on the other GPUs. This is known as all-to-all communication [6] and can be a major performance bottleneck.\n\nWe use FX to implement a transformation that can overlap computation with all-to-all communication. Figure 5(a) shows the example of a model graph which has embedding table accesses (EmbeddingAllToAll) and other ops. Without any optimization, they are sequentially executed on a GPU stream, as shown in Figure 5(b). Using FX, we break EmbeddingAllToAll into EmbeddingAllToAll_Request and EmbeddingAllToAll_Wait, and schedule independent ops in between them.", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\n**(a) Model graph**\n\n\n
\n
\n\n**(b) Original execution order**\n\n\n
\n
\n\n**(c)Optimized execution order**\n\n**Figure 5: Overlapping Computation with Communication**\n\n### 3.4 Summary\n\nTable 1 summarizes the optimizations discussed in this section and the corresponding performance bottlenecks addressed.\n\n\n \n Optimization\n | \n Performance Bottleneck Addressed\n | \n
\n \n Combining Input Sparse Features\n | \n Host-to-device memory copy\n | \n
\n \n Horizontal fusion\n | \n GPU kernel launch overhead\n | \n
\n \n Overlapping Computation with Communication\n | \n Embedding all-to-all access time\n | \n
\n
", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "**Table 1: Summary of the optimizations and the performance bottlenecks addressed**\n\nWe have also developed other FX transformations which are not discussed in this section due to space limitations.\n\nTo discover which models would benefit from these transformations, we analyzed the performance data collected by MAIProf [7] from the models that run at Meta\u2019s data centers. Altogether, these transformations provide up to 2-3x of speedups compared to eager mode on a set of production models.\n\n## 4. Concluding Remarks\n\nThe graph mode in PyTorch is preferred over the eager mode for production use for performance reasons. FX is a powerful tool for capturing and optimizing the graph of a PyTorch program. We demonstrate three FX transformations that are used to optimize production recommendation models inside Meta. We hope that this blog can motivate other PyTorch model developers to use graph transformations to boost their models\u2019 performance.\n\nReferences", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "References\n\n[1] [End-to-end Machine Learning Framework](https://pytorch.org/features/)\n\n[2] [DNNFusion: Accelerating Deep Neural Networks Execution with Advanced Operator Fusion](https://arxiv.org/abs/2108.13342)\n\n[3] [Torch.FX: Practical Program Capture and Transformation for Deep Learning In Python](https://arxiv.org/pdf/2112.08429.pdf), MLSys 2022.\n\n[4] [Torch.fx\u2014PyTorch 1.12 documentation](https://pytorch.org/docs/stable/fx.html)\n\n[5] [Feature Hashing for Large Scale Multitask Learning](https://alex.smola.org/papers/2009/Weinbergeretal09.pdf)\n\n[6] [NVIDIA Collective Communication Library Documentation](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/)\n\n[7] [Performance Debugging of Production PyTorch Models at Meta](https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/)", "metadata": {"source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.3 adds mobile, privacy, quantization, and named tensors'\nauthor: Team PyTorch\n---\n\nPyTorch continues to gain momentum because of its focus on meeting the needs of researchers, its streamlined workflow for production use, and most of all because of the enthusiastic support it has received from the AI community. PyTorch citations in papers on ArXiv [grew 194 percent in the first half of 2019 alone, as noted by O\u2019Reilly](https://www.oreilly.com/ideas/one-simple-graphic-researchers-love-pytorch-and-tensorflow?fbclid=IwAR3kYmlyD7zky37IYFu0cafQn7yemhl8P-7MNyB30z0q5RDzxcTOrP8kxDk), and the number of contributors to the platform has grown more than 50 percent over the last year, to nearly 1,200. Facebook, Microsoft, Uber, and other organizations across industries are increasingly using it as the foundation for their most important machine learning (ML) research and production workloads.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
{"page_content": "We are now advancing the platform further with the release of PyTorch 1.3, which includes experimental support for features such as seamless model deployment to mobile devices, model quantization for better performance at inference time, and front-end improvements, like the ability to name tensors and create clearer code with less need for inline comments. We\u2019re also launching a number of additional tools and libraries to support model interpretability and bringing multimodal research to production.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, we\u2019ve collaborated with Google and Salesforce to add broad support for Cloud Tensor Processing Units, providing a significantly accelerated option for training large-scale deep neural networks. [Alibaba Cloud](https://data.aliyun.com/bigdata/pai-pytorch?spm=5176.12825654.a9ylfrljh.d112.7b652c4ayuOO4M&scm=20140722.1068.1.1098&aly_as=-PvJ5e4c) also joins Amazon Web Services, Microsoft Azure, and Google Cloud as supported cloud platforms for PyTorch users. You can get started now at", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "[pytorch.org](https://pytorch.org/get-started/locally/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "# PyTorch 1.3\n\nThe 1.3 release of PyTorch brings significant new features, including experimental support for mobile device deployment, eager mode quantization at 8-bit integer, and the ability to name tensors. With each of these enhancements, we look forward to additional contributions and improvements from the PyTorch community.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Named tensors (experimental)\n\nCornell University\u2019s [Sasha Rush has argued](http://nlp.seas.harvard.edu/NamedTensor) that, despite its ubiquity in deep learning, the traditional implementation of tensors has significant shortcomings, such as exposing private dimensions, broadcasting based on absolute position, and keeping type information in documentation. He proposed named tensors as an alternative approach.\n\nToday, we name and access dimensions by comment:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "```python\n# Tensor[N, C, H, W]\n images = torch.randn(32, 3, 56, 56)\n images.sum(dim=1)\n images.select(dim=1, index=0)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "But naming explicitly leads to more readable and maintainable code:\n\n```python\nNCHW = [\u2018N\u2019, \u2018C\u2019, \u2018H\u2019, \u2018W\u2019]\n images = torch.randn(32, 3, 56, 56, names=NCHW)\n images.sum('C')\n images.select('C', index=0)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Quantization (experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "It\u2019s important to make efficient use of both server-side and on-device compute resources when developing ML applications. To support more efficient deployment on servers and edge devices, PyTorch 1.3 now supports 8-bit model quantization using the familiar eager mode Python API. Quantization refers to techniques used to perform computation and storage at reduced precision, such as 8-bit integer. This currently experimental feature includes support for post-training quantization, dynamic quantization, and", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "quantization-aware training. It leverages the [FBGEMM](https://github.com/pytorch/FBGEMM) and [QNNPACK](https://github.com/pytorch/QNNPACK) state-of-the-art quantized kernel back ends, for x86 and ARM CPUs, respectively, which are integrated with PyTorch and now share a common API.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "To learn more about the design and architecture, check out the API docs [here](https://pytorch.org/docs/master/quantization.html), and get started with any of the supported techniques using the tutorials available [here](https://pytorch.org/tutorials/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch mobile (experimental)\n\nRunning ML on edge devices is growing in importance as applications continue to demand lower latency. It is also a foundational element for privacy-preserving techniques such as federated learning. To enable more efficient on-device ML, PyTorch 1.3 now supports an end-to-end workflow from Python to deployment on iOS and Android.\n\nThis is an early, experimental release, optimized for end-to-end development. Coming releases will focus on:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "* Optimization for size: Build level optimization and selective compilation depending on the operators needed for user applications (i.e., you pay binary size for only the operators you need)\n* Performance: Further improvements to performance and coverage on mobile CPUs and GPUs\n* High level API: Extend mobile native APIs to cover common preprocessing and integration tasks needed for incorporating ML in mobile applications. e.g. Computer vision and NLP", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Learn more or get started on Android or iOS [here](http://pytorch.org/mobile).\n\n# New tools for model interpretability and privacy", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Captum", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "As models become ever more complex, it is increasingly important to develop new methods for model interpretability. To help address this need, we\u2019re launching Captum, a tool to help developers working in PyTorch understand why their model generates a specific output. Captum provides state-of-the-art tools to understand how the importance of specific neurons and layers and affect predictions made by the models. Captum\u2019s algorithms include integrated gradients, conductance, SmoothGrad and VarGrad, and", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "DeepLift.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "The example below shows how to apply model interpretability algorithms on a pretrained ResNet model and then visualize the attributions for each pixel by overlaying them on the image.\n\n```python\nnoise_tunnel = NoiseTunnel(integrated_gradients)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "attributions_ig_nt, delta = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)\n_ = viz.visualize_image_attr_multiple([\"original_image\", \"heat_map\"],\n [\"all\", \"positive\"],\n np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),\n np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "cmap=default_cmap,\n show_colorbar=True)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
\n\n

\n
\n\nLearn more about Captum at [captum.ai](https://www.captum.ai/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "CrypTen", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Practical applications of ML via cloud-based or machine-learning-as-a-service (MLaaS) platforms pose a range of security and privacy challenges. In particular, users of these platforms may not want or be able to share unencrypted data, which prevents them from taking full advantage of ML tools. To address these challenges, the ML community is exploring a number of technical approaches, at various levels of maturity. These include homomorphic encryption, secure multiparty computation, trusted execution", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "environments, on-device computation, and differential privacy.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "To provide a better understanding of how some of these technologies can be applied, we are releasing CrypTen, a new community-based research platform for taking the field of privacy-preserving ML forward. Learn more about CrypTen [here](https://ai.facebook.com/blog/crypten-a-new-research-tool-for-secure-machine-learning-with-pytorch). It is available on GitHub [here](https://github.com/facebookresearch/CrypTen).\n\n# Tools for multimodal AI systems", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Digital content is often made up of several modalities, such as text, images, audio, and video. For example, a single public post might contain an image, body text, a title, a video, and a landing page. Even one particular component may have more than one modality, such as a video that contains both visual and audio signals, or a landing page that is composed of images, text, and HTML sources.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "The ecosystem of tools and libraries that work with PyTorch offer enhanced ways to address the challenges of building multimodal ML systems. Here are some of the latest libraries launching today:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Detectron2\n\nObject detection and segmentation are used for tasks ranging from autonomous vehicles to content understanding for platform integrity. To advance this work, Facebook AI Research (FAIR) is releasing Detectron2, an object detection library now implemented in PyTorch. Detectron2 provides support for the latest models and tasks, increased flexibility to aid computer vision research, and improvements in maintainability and scalability to support production use cases.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Detectron2 is available [here](https://github.com/facebookresearch/detectron2) and you can learn more [here](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Speech extensions to fairseq", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Language translation and audio processing are critical components in systems and applications such as search, translation, speech, and assistants. There has been tremendous progress in these fields recently thanks to the development of new architectures like transformers, as well as large-scale pretraining methods. We\u2019ve extended fairseq, a framework for sequence-to-sequence applications such as language translation, to include support for end-to-end learning for speech and audio recognition tasks.These", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "extensions to fairseq enable faster exploration and prototyping of new speech research ideas while offering a clear path to production.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Get started with fairseq [here](https://github.com/pytorch/fairseq/tree/master/examples/speech_recognition).\n\n# Cloud provider and hardware ecosystem support\n\nCloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud provide extensive support for anyone looking to develop ML on PyTorch and deploy in production. We\u2019re excited to share the general availability of Google Cloud TPU support and a newly launched integration with Alibaba Cloud. We\u2019re also expanding hardware ecosystem support.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "* Google Cloud TPU support now broadly available. To accelerate the largest-scale machine learning (ML) applications deployed today and enable rapid development of the ML applications of tomorrow, Google created custom silicon chips called Tensor Processing Units ([TPUs](https://cloud.google.com/tpu/)). When assembled into multi-rack ML supercomputers called [Cloud TPU Pods](https://cloud.google.com/blog/products/ai-machine-learning/cloud-tpu-pods-break-ai-training-records), these TPUs can complete ML", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "workloads in minutes or hours that previously took days or weeks on other systems. Engineers from Facebook, Google, and Salesforce worked together to enable and pilot Cloud TPU support in PyTorch, including experimental support for Cloud TPU Pods. PyTorch support for Cloud TPUs is also available in Colab. Learn more about how to get started with PyTorch on Cloud TPUs [here](https://github.com/pytorch/xla).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "* Alibaba adds support for PyTorch in Alibaba Cloud. The initial integration involves a one-click solution for PyTorch 1.x, Data Science Workshop notebook service, distributed training with Gloo/NCCL, as well as seamless integration with Alibaba IaaS such as OSS, ODPS, and NAS. Together with the toolchain provided by Alibaba, we look forward to significantly reducing the overhead necessary for adoption, as well as helping Alibaba Cloud\u2019s global customer base leverage PyTorch to develop new AI applications.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "* ML hardware ecosystem expands. In addition to key GPU and CPU partners, the PyTorch ecosystem has also enabled support for dedicated ML accelerators. Updates from [Intel](https://www.intel.ai/nnpi-glow-pytorch/) and [Habana](https://medium.com/@HabanaLabs/unlocking-ai-scaling-through-software-and-hardware-interface-standardization-77561cb7598b) showcase how PyTorch, connected to the Glow optimizing compiler, enables developers to utilize these market-specific solutions.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "Additionally, we\u2019ve collaborated with Google and Salesforce to add broad support for Cloud Tensor Processing Units, providing a significantly accelerated option for training large-scale deep neural networks. [Alibaba Cloud](https://data.aliyun.com/bigdata/pai-pytorch?spm=5176.12825654.a9ylfrljh.d112.7b652c4ayuOO4M&scm=20140722.1068.1.1098&aly_as=-PvJ5e4c) also joins Amazon Web Services, Microsoft Azure, and Google Cloud as supported cloud platforms for PyTorch users. You can get started now at [pytorch.org](https://pytorch.org/get-started/locally/).\n\n# PyTorch 1.3\n\nThe 1.3 release of PyTorch brings significant new features, including experimental support for mobile device deployment, eager mode quantization at 8-bit integer, and the ability to name tensors. With each of these enhancements, we look forward to additional contributions and improvements from the PyTorch community.\n\n## Named tensors (experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "Cornell University\u2019s [Sasha Rush has argued](http://nlp.seas.harvard.edu/NamedTensor) that, despite its ubiquity in deep learning, the traditional implementation of tensors has significant shortcomings, such as exposing private dimensions, broadcasting based on absolute position, and keeping type information in documentation. He proposed named tensors as an alternative approach.\n\nToday, we name and access dimensions by comment:\n\n```python\n# Tensor[N, C, H, W]\n images = torch.randn(32, 3, 56, 56)\n images.sum(dim=1)\n images.select(dim=1, index=0)\n```\n\nBut naming explicitly leads to more readable and maintainable code:\n\n```python\nNCHW = [\u2018N\u2019, \u2018C\u2019, \u2018H\u2019, \u2018W\u2019]\n images = torch.randn(32, 3, 56, 56, names=NCHW)\n images.sum('C')\n images.select('C', index=0)\n```\n\n## Quantization (experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "It\u2019s important to make efficient use of both server-side and on-device compute resources when developing ML applications. To support more efficient deployment on servers and edge devices, PyTorch 1.3 now supports 8-bit model quantization using the familiar eager mode Python API. Quantization refers to techniques used to perform computation and storage at reduced precision, such as 8-bit integer. This currently experimental feature includes support for post-training quantization, dynamic quantization, and quantization-aware training. It leverages the [FBGEMM](https://github.com/pytorch/FBGEMM) and [QNNPACK](https://github.com/pytorch/QNNPACK) state-of-the-art quantized kernel back ends, for x86 and ARM CPUs, respectively, which are integrated with PyTorch and now share a common API.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "To learn more about the design and architecture, check out the API docs [here](https://pytorch.org/docs/master/quantization.html), and get started with any of the supported techniques using the tutorials available [here](https://pytorch.org/tutorials/).\n\n## PyTorch mobile (experimental)\n\nRunning ML on edge devices is growing in importance as applications continue to demand lower latency. It is also a foundational element for privacy-preserving techniques such as federated learning. To enable more efficient on-device ML, PyTorch 1.3 now supports an end-to-end workflow from Python to deployment on iOS and Android.\n\nThis is an early, experimental release, optimized for end-to-end development. Coming releases will focus on:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "* Optimization for size: Build level optimization and selective compilation depending on the operators needed for user applications (i.e., you pay binary size for only the operators you need)\n* Performance: Further improvements to performance and coverage on mobile CPUs and GPUs\n* High level API: Extend mobile native APIs to cover common preprocessing and integration tasks needed for incorporating ML in mobile applications. e.g. Computer vision and NLP\n\nLearn more or get started on Android or iOS [here](http://pytorch.org/mobile).\n\n# New tools for model interpretability and privacy\n\n## Captum", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "## Captum\n\nAs models become ever more complex, it is increasingly important to develop new methods for model interpretability. To help address this need, we\u2019re launching Captum, a tool to help developers working in PyTorch understand why their model generates a specific output. Captum provides state-of-the-art tools to understand how the importance of specific neurons and layers and affect predictions made by the models. Captum\u2019s algorithms include integrated gradients, conductance, SmoothGrad and VarGrad, and DeepLift.\n\nThe example below shows how to apply model interpretability algorithms on a pretrained ResNet model and then visualize the attributions for each pixel by overlaying them on the image.\n\n```python\nnoise_tunnel = NoiseTunnel(integrated_gradients)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "attributions_ig_nt, delta = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)\n_ = viz.visualize_image_attr_multiple([\"original_image\", \"heat_map\"],\n [\"all\", \"positive\"],\n np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),\n np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),\n cmap=default_cmap,\n show_colorbar=True)\n```\n\n\n

\n
\n\n

\n
\n\nLearn more about Captum at [captum.ai](https://www.captum.ai/).\n\n## CrypTen", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "## CrypTen\n\nPractical applications of ML via cloud-based or machine-learning-as-a-service (MLaaS) platforms pose a range of security and privacy challenges. In particular, users of these platforms may not want or be able to share unencrypted data, which prevents them from taking full advantage of ML tools. To address these challenges, the ML community is exploring a number of technical approaches, at various levels of maturity. These include homomorphic encryption, secure multiparty computation, trusted execution environments, on-device computation, and differential privacy.\n\nTo provide a better understanding of how some of these technologies can be applied, we are releasing CrypTen, a new community-based research platform for taking the field of privacy-preserving ML forward. Learn more about CrypTen [here](https://ai.facebook.com/blog/crypten-a-new-research-tool-for-secure-machine-learning-with-pytorch). It is available on GitHub [here](https://github.com/facebookresearch/CrypTen).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "# Tools for multimodal AI systems\n\nDigital content is often made up of several modalities, such as text, images, audio, and video. For example, a single public post might contain an image, body text, a title, a video, and a landing page. Even one particular component may have more than one modality, such as a video that contains both visual and audio signals, or a landing page that is composed of images, text, and HTML sources.\n\nThe ecosystem of tools and libraries that work with PyTorch offer enhanced ways to address the challenges of building multimodal ML systems. Here are some of the latest libraries launching today:\n\n## Detectron2", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "## Detectron2\n\nObject detection and segmentation are used for tasks ranging from autonomous vehicles to content understanding for platform integrity. To advance this work, Facebook AI Research (FAIR) is releasing Detectron2, an object detection library now implemented in PyTorch. Detectron2 provides support for the latest models and tasks, increased flexibility to aid computer vision research, and improvements in maintainability and scalability to support production use cases.\n\nDetectron2 is available [here](https://github.com/facebookresearch/detectron2) and you can learn more [here](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-).\n\n## Speech extensions to fairseq", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "Language translation and audio processing are critical components in systems and applications such as search, translation, speech, and assistants. There has been tremendous progress in these fields recently thanks to the development of new architectures like transformers, as well as large-scale pretraining methods. We\u2019ve extended fairseq, a framework for sequence-to-sequence applications such as language translation, to include support for end-to-end learning for speech and audio recognition tasks.These extensions to fairseq enable faster exploration and prototyping of new speech research ideas while offering a clear path to production.\n\nGet started with fairseq [here](https://github.com/pytorch/fairseq/tree/master/examples/speech_recognition).\n\n# Cloud provider and hardware ecosystem support", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "Cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud provide extensive support for anyone looking to develop ML on PyTorch and deploy in production. We\u2019re excited to share the general availability of Google Cloud TPU support and a newly launched integration with Alibaba Cloud. We\u2019re also expanding hardware ecosystem support.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "* Google Cloud TPU support now broadly available. To accelerate the largest-scale machine learning (ML) applications deployed today and enable rapid development of the ML applications of tomorrow, Google created custom silicon chips called Tensor Processing Units ([TPUs](https://cloud.google.com/tpu/)). When assembled into multi-rack ML supercomputers called [Cloud TPU Pods](https://cloud.google.com/blog/products/ai-machine-learning/cloud-tpu-pods-break-ai-training-records), these TPUs can complete ML workloads in minutes or hours that previously took days or weeks on other systems. Engineers from Facebook, Google, and Salesforce worked together to enable and pilot Cloud TPU support in PyTorch, including experimental support for Cloud TPU Pods. PyTorch support for Cloud TPUs is also available in Colab. Learn more about how to get started with PyTorch on Cloud TPUs [here](https://github.com/pytorch/xla).\n* Alibaba adds support for PyTorch in Alibaba Cloud. The initial integration involves a one-click solution for PyTorch 1.x, Data Science Workshop notebook service, distributed training with Gloo/NCCL, as well as seamless integration with Alibaba IaaS such as OSS, ODPS, and NAS. Together with the toolchain provided by Alibaba, we look forward to significantly reducing the overhead necessary for adoption, as well as helping Alibaba Cloud\u2019s global customer base leverage PyTorch to develop new AI applications.\n* ML hardware ecosystem expands. In addition to key GPU and CPU partners, the PyTorch ecosystem has also enabled support for dedicated ML accelerators. Updates from [Intel](https://www.intel.ai/nnpi-glow-pytorch/) and [Habana](https://medium.com/@HabanaLabs/unlocking-ai-scaling-through-software-and-hardware-interface-standardization-77561cb7598b) showcase how PyTorch, connected to the Glow optimizing compiler, enables developers to utilize these market-specific solutions.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
{"page_content": "# Growth in the PyTorch community\n\nAs an open source, community-driven project, PyTorch benefits from wide range of contributors bringing new capabilities to the ecosystem. Here are some recent examples:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "* Mila SpeechBrain aims to provide an open source, all-in-one speech toolkit based on PyTorch. The goal is to develop a single, flexible, user-friendly toolkit that can be used to easily develop state-of-the-art systems for speech recognition (both end to end and HMM-DNN), speaker recognition, speech separation, multi-microphone signal processing (e.g., beamforming), self-supervised learning, and many others. [Learn more](https://speechbrain.github.io/)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "* SpaCy is a new wrapping library with consistent and easy-to-use interfaces to several models, in order to extract features to power NLP pipelines. Support is provided for via spaCy\u2019s standard training API. The library also calculates an alignment so the transformer features can be related back to actual words instead of just wordpieces. [Learn more](https://explosion.ai/blog/spacy-pytorch-transformers)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "* HuggingFace PyTorch-Transformers (formerly known as pytorch-pretrained-bert is a library of state-of-the-art pretrained models for Natural Language Processing (NLP). The library currently contains PyTorch implementations, pretrained model weights, usage scripts, and conversion utilities for models such as BERT, GPT-2, RoBERTa, and DistilBERT. It has also grown quickly, with more than 13,000 GitHub stars and a broad set of users. [Learn more](https://github.com/huggingface/transformers)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "* PyTorch Lightning is a Keras-like ML library for PyTorch. It leaves core training and validation logic to you and automates the rest. Reproducibility is a crucial requirement for many fields of research, including those based on ML techniques. As the number of research papers submitted to arXiv and conferences skyrockets into the tens of thousands, scaling reproducibility becomes difficult. [Learn more](https://github.com/williamFalcon/pytorch-lightning).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "* Mila SpeechBrain aims to provide an open source, all-in-one speech toolkit based on PyTorch. The goal is to develop a single, flexible, user-friendly toolkit that can be used to easily develop state-of-the-art systems for speech recognition (both end to end and HMM-DNN), speaker recognition, speech separation, multi-microphone signal processing (e.g., beamforming), self-supervised learning, and many others. [Learn more](https://speechbrain.github.io/)\n* SpaCy is a new wrapping library with consistent and easy-to-use interfaces to several models, in order to extract features to power NLP pipelines. Support is provided for via spaCy\u2019s standard training API. The library also calculates an alignment so the transformer features can be related back to actual words instead of just wordpieces. [Learn more](https://explosion.ai/blog/spacy-pytorch-transformers)\n* HuggingFace PyTorch-Transformers (formerly known as pytorch-pretrained-bert is a library of state-of-the-art pretrained models for Natural Language Processing (NLP). The library currently contains PyTorch implementations, pretrained model weights, usage scripts, and conversion utilities for models such as BERT, GPT-2, RoBERTa, and DistilBERT. It has also grown quickly, with more than 13,000 GitHub stars and a broad set of users. [Learn more](https://github.com/huggingface/transformers)\n* PyTorch Lightning is a Keras-like ML library for PyTorch. It leaves core training and validation logic to you and automates the rest. Reproducibility is a crucial requirement for many fields of research, including those based on ML techniques. As the number of research papers submitted to arXiv and conferences skyrockets into the tens of thousands, scaling reproducibility becomes difficult. [Learn more](https://github.com/williamFalcon/pytorch-lightning).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
{"page_content": "We recently held the first online Global PyTorch Summer Hackathon, where researchers and developers around the world were invited to build innovative new projects with PyTorch. Nearly 1,500 developers participated, submitting projects ranging from livestock disease detection to AI-powered financial assistants. The winning projects were:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "* Torchmeta, which provides extensions for PyTorch to simplify the development of meta-learning algorithms in PyTorch. It features a unified interface inspired by TorchVision for both few-shot classification and regression problems, to allow easy benchmarking on multiple data sets to aid with reproducibility.\n* Open-Unmix, a system for end-to-end music demixing with PyTorch. Demixing separates the individual instruments or vocal track from any stereo recording.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "* Endless AI-Generated Tees, a store featuring AI-generated T-shirt designs that can be purchased and delivered worldwide. The system uses a state-of-the-art generative model (StyleGAN) that was built with PyTorch and then trained on modern art.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
-{"page_content": "Visit [pytorch.org](https://pytorch.org/) to learn more and get started with PyTorch 1.3 and the latest libraries and ecosystem projects. We look forward to the contributions, exciting research advancements, and real-world applications that the community builds with PyTorch.\n\n*We\u2019d like to thank the entire PyTorch team and the community for all their contributions to this work.*", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "* Torchmeta, which provides extensions for PyTorch to simplify the development of meta-learning algorithms in PyTorch. It features a unified interface inspired by TorchVision for both few-shot classification and regression problems, to allow easy benchmarking on multiple data sets to aid with reproducibility.\n* Open-Unmix, a system for end-to-end music demixing with PyTorch. Demixing separates the individual instruments or vocal track from any stereo recording.\n* Endless AI-Generated Tees, a store featuring AI-generated T-shirt designs that can be purchased and delivered worldwide. The system uses a state-of-the-art generative model (StyleGAN) that was built with PyTorch and then trained on modern art.\n\nVisit [pytorch.org](https://pytorch.org/) to learn more and get started with PyTorch 1.3 and the latest libraries and ecosystem projects. We look forward to the contributions, exciting research advancements, and real-world applications that the community builds with PyTorch.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
+{"page_content": "*We\u2019d like to thank the entire PyTorch team and the community for all their contributions to this work.*", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/", "category": "pytorch blogs"}}
{"page_content": "---\nlayout: blog_detail\ntitle: \"New library updates in PyTorch 1.12\"\nauthor: Team PyTorch\nfeatured-img: ''\n---\n\nWe are bringing a number of improvements to the current PyTorch libraries, alongside the [PyTorch 1.12 release](https://github.com/pytorch/pytorch/releases/tag/v1.12.0). These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Summary:\n- **TorchVision** - Added multi-weight support API, new architectures, model variants, and pretrained weight. See the release notes [here](https://github.com/pytorch/vision/releases).\n- **TorchAudio** - Introduced beta features including a streaming API, a CTC beam search decoder, and new beamforming modules and methods. See the release notes [here](https://github.com/pytorch/audio/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **TorchText** - Extended support for scriptable BERT tokenizer and added datasets for GLUE benchmark. See the release notes [here](https://github.com/pytorch/text/releases).\n- **TorchRec** - Added EmbeddingModule benchmarks, examples for TwoTower Retrieval, inference and sequential embeddings, metrics, improved planner and demonstrated integration with production components. See the release notes [here](https://github.com/pytorch/torchrec/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- **TorchX** - Launch PyTorch trainers developed on local workspaces onto five different types of schedulers. See the release notes [here](https://github.com/pytorch/torchx/blob/main/CHANGELOG.md?plain=1#L3).\n- **FBGemm** - Added and improved kernels for Recommendation Systems inference workloads, including table batched embedding bag, jagged tensor operations, and other special-case optimizations.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchVision v0.13", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Multi-weight support API\n\nTorchVision v0.13 offers a new [Multi-weight support API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) for loading different weights to the existing model builder methods:\n\n```python\nfrom torchvision.models import *\n\n# Old weights with accuracy 76.130%\nresnet50(weights=ResNet50_Weights.IMAGENET1K_V1)\n\n# New weights with accuracy 80.858%\nresnet50(weights=ResNet50_Weights.IMAGENET1K_V2)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "# Best available weights (currently alias for IMAGENET1K_V2)\n# Note that these weights may change across versions\nresnet50(weights=ResNet50_Weights.DEFAULT)\n\n# Strings are also supported\nresnet50(weights=\"IMAGENET1K_V2\")\n\n# No weights - random initialization\nresnet50(weights=None)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The new API bundles along with the weights important details such as the preprocessing transforms and meta-data such as labels. Here is how to make the most out of it:\n\n```python\nfrom torchvision.io import read_image\nfrom torchvision.models import resnet50, ResNet50_Weights\n\nimg = read_image(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\n\n# Step 1: Initialize model with the best available weights\nweights = ResNet50_Weights.DEFAULT\nmodel = resnet50(weights=weights)\nmodel.eval()", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "# Step 2: Initialize the inference transforms\npreprocess = weights.transforms()\n\n# Step 3: Apply inference preprocessing transforms\nbatch = preprocess(img).unsqueeze(0)\n\n# Step 4: Use the model and print the predicted category\nprediction = model(batch).squeeze(0).softmax(0)\nclass_id = prediction.argmax().item()\nscore = prediction[class_id].item()\ncategory_name = weights.meta[\"categories\"][class_id]\nprint(f\"{category_name}: {100 * score:.1f}%\")", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "You can read more about the new API in the [docs](https://pytorch.org/vision/0.13/models.html). To provide your feedback, please use this dedicated [Github issue](https://github.com/pytorch/vision/issues/5088).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "New architectures and model variants", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Classification\n\nThe [Swin Transformer](https://arxiv.org/abs/2103.14030) and [EfficienetNetV2](https://arxiv.org/abs/2104.00298) are two popular classification models which are often used for downstream vision tasks. This release includes 6 pre-trained weights for their classification variants. Here is how to use the new models:\n\n```python\nimport torch\nfrom torchvision.models import *\n\nimage = torch.rand(1, 3, 224, 224)\nmodel = swin_t(weights=\"DEFAULT\").eval()\nprediction = model(image)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "image = torch.rand(1, 3, 384, 384)\nmodel = efficientnet_v2_s(weights=\"DEFAULT\").eval()\nprediction = model(image)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "In addition to the above, we also provide new variants for existing architectures such as ShuffleNetV2, ResNeXt and MNASNet. The accuracies of all the new pre-trained models obtained on ImageNet-1K are seen below:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "| **Model** | **Acc@1** | **Acc@5** |\n|--------------------------------|-----------|-----------|\n| swin_t | 81.474 | 95.776 |\n| swin_s | 83.196 | 96.36 |\n| swin_b | 83.582 | 96.64 |\n| efficientnet_v2_s | 84.228 | 96.878 |\n| efficientnet_v2_m | 85.112 | 97.156 |\n| efficientnet_v2_l | 85.808 | 97.788 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "| resnext101_64x4d | 83.246 | 96.454 |\n| resnext101_64x4d (quantized) | 82.898 | 96.326 |\n| shufflenet_v2_x1_5 | 72.996 | 91.086 |\n| shufflenet_v2_x1_5 (quantized) | 72.052 | 0.700 |\n| shufflenet_v2_x2_0 | 76.230 | 93.006 |\n| shufflenet_v2_x2_0 (quantized) | 75.354 | 92.488 |\n| mnasnet0_75 | 71.180 | 90.496 |\n| mnas1_3 | 76.506 | 93.522 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We would like to thank Hu Ye for contributing to TorchVision the Swin Transformer implementation.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(BETA) Object Detection and Instance Segmentation\n\nWe have introduced 3 new model variants for RetinaNet, FasterRCNN and MaskRCNN that include several [post-paper architectural optimizations](https://github.com/pytorch/vision/pull/5444) and improved training recipes. All models can be used similarly:\n\n```python\nimport torch\nfrom torchvision.models.detection import *", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "images = [torch.rand(3, 800, 600)]\nmodel = retinanet_resnet50_fpn_v2(weights=\"DEFAULT\")\n# model = fasterrcnn_resnet50_fpn_v2(weights=\"DEFAULT\")\n# model = maskrcnn_resnet50_fpn_v2(weights=\"DEFAULT\")\nmodel.eval()\nprediction = model(images)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Below we present the metrics of the new variants on COCO val2017. In parenthesis we denote the improvement over the old variants:\n\n| **Model** | **Box mAP** | **Mask mAP** |\n|----------------------------|-------------|--------------|\n| retinanet_resnet50_fpn_v2 | 41.5 (+5.1) | - |\n| fasterrcnn_resnet50_fpn_v2 | 46.7 (+9.7) | - |\n| maskrcnn_resnet50_fpn_v2 | 47.4 (+9.5) | 41.8 (+7.2) |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We would like to thank Ross Girshick, Piotr Dollar, Vaibhav Aggarwal, Francisco Massa and Hu Ye for their past research and contributions to this work.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "New pre-trained weights", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "SWAG weights", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The ViT and RegNet model variants offer new pre-trained [SWAG](https://arxiv.org/abs/2201.08371) (\u200b\u200bSupervised Weakly from hashtAGs) weights. One of the biggest of these models achieves a whopping 88.6% accuracy on ImageNet-1K. We currently offer two versions of the weights: 1) fine-tuned end-to-end weights on ImageNet-1K (highest accuracy) and 2) frozen trunk weights with a linear classifier fit on ImageNet-1K (great for transfer learning). Below we see the detailed accuracies of each model variant:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "| **Model Weights** | **Acc@1** | **Acc@5** |\n|--------------------------------------------------|-----------|-----------|\n| RegNet_Y_16GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 86.012 | 98.054 |\n| RegNet_Y_16GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 83.976 | 97.244 |\n| RegNet_Y_32GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 86.838 | 98.362 |\n| RegNet_Y_32GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 84.622 | 97.48 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "| RegNet_Y_128GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.228 | 98.682 |\n| RegNet_Y_128GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 86.068 | 97.844 |\n| ViT_B_16_Weights.IMAGENET1K_SWAG_E2E_V1 | 85.304 | 97.65 |\n| ViT_B_16_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 81.886 | 96.18 |\n| ViT_L_16_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.064 | 98.512 |\n| ViT_L_16_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 85.146 | 97.422 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "| ViT_H_14_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.552 | 98.694 |\n| ViT_H_14_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 85.708 | 97.73 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The SWAG weights are released under the [Attribution-NonCommercial 4.0 International](https://github.com/facebookresearch/SWAG/blob/main/LICENSE) license. We would like to thank Laura Gustafson, Mannat Singh and Aaron Adcock for their work and support in making the weights available to TorchVision.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Model Refresh\n\nThe release of the Multi-weight support API enabled us to refresh the most popular models and offer more accurate weights. We improved on average each model by ~3 points. The new recipe used was learned on top of ResNet50 and its details were covered on a [previous blog post](https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "| **Model** | **Old weights** | **New weights** |\n|------------------------------|-----------------|-----------------|\n| efficientnet_b1 | 78.642 | 79.838 |\n| mobilenet_v2 | 71.878 | 72.154 |\n| mobilenet_v3_large | 74.042 | 75.274 |\n| regnet_y_400mf | 74.046 | 75.804 |\n| regnet_y_800mf | 76.42 | 78.828 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "| regnet_y_1_6gf | 77.95 | 80.876 |\n| regnet_y_3_2gf | 78.948 | 81.982 |\n| regnet_y_8gf | 80.032 | 82.828 |\n| regnet_y_16gf | 80.424 | 82.886 |\n| regnet_y_32gf | 80.878 | 83.368 |\n| regnet_x_400mf | 72.834 | 74.864 |\n| regnet_x_800mf | 75.212 | 77.522 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "| regnet_x_1_6gf | 77.04 | 79.668 |\n| regnet_x_3_2gf | 78.364 | 81.196 |\n| regnet_x_8gf | 79.344 | 81.682 |\n| regnet_x_16gf | 80.058 | 82.716 |\n| regnet_x_32gf | 80.622 | 83.014 |\n| resnet50 | 76.13 | 80.858 |\n| resnet50 (quantized) | 75.92 | 80.282 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "| resnet101 | 77.374 | 81.886 |\n| resnet152 | 78.312 | 82.284 |\n| resnext50_32x4d | 77.618 | 81.198 |\n| resnext101_32x8d | 79.312 | 82.834 |\n| resnext101_32x8d (quantized) | 78.986 | 82.574 |\n| wide_resnet50_2 | 78.468 | 81.602 |\n| wide_resnet101_2 | 78.848 | 82.51 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We would like to thank Piotr Dollar, Mannat Singh and Hugo Touvron for their past research and contributions to this work.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "New Augmentations, Layers and Losses", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "This release brings a bunch of new primitives which can be used to produce SOTA models. Some highlights include the addition of [AugMix](https://arxiv.org/abs/1912.02781) data-augmentation method, the [DropBlock](https://arxiv.org/abs/1810.12890) layer, the [cIoU/dIoU](https://arxiv.org/abs/1911.08287) loss and [many more](https://github.com/pytorch/vision/issues/5410). We would like to thank Aditya Oke, Abhijit Deo, Yassine Alouini and Hu Ye for contributing to the project and for helping us maintain", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchVision relevant and fresh.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Documentation", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We completely revamped our models documentation to make them easier to browse, and added various key information such as supported image sizes, or image pre-processing steps of pre-trained weights. We now have a [main model page](https://pytorch.org/vision/main/models.html) with various [summary tables](https://pytorch.org/vision/main/models.html#table-of-all-available-classification-weights) of available weights, and each model has a [dedicated page](https://pytorch.org/vision/main/models/resnet.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Each model builder is also documented in their [own page](https://pytorch.org/vision/main/models/generated/torchvision.models.resnet50.html#torchvision.models.resnet50), with more details about the available weights, including accuracy, minimal image size, link to training recipes, and other valuable info. For comparison, our previous models docs are [here](https://pytorch.org/vision/0.12/models.html). To provide feedback on the new documentation, please use the dedicated [Github", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "issue](https://github.com/pytorch/vision/issues/5511).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchAudio v0.12", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(BETA) Streaming API\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "StreamReader is TorchAudio\u2019s new I/O API. It is backed by FFmpeg\u2020, and allows users to:\n- Decode audio and video formats, including MP4 and AAC\n- Handle input forms, such as local files, network protocols, microphones, webcams, screen captures and file-like objects\n- Iterate over and decode chunk-by-chunk, while changing the sample rate or frame rate\n- Apply audio and video filters, such as low-pass filter and image scaling\n- Decode video with Nvidia's hardware-based decoder (NVDEC)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "For usage details, please check out the [documentation](https://pytorch.org/audio/0.12.0/io.html#streamreader) and tutorials:\n- [Media Stream API - Pt.1](https://pytorch.org/audio/0.12.0/tutorials/streaming_api_tutorial.html)\n- [Media Stream API - Pt.2](https://pytorch.org/audio/0.12.0/tutorials/streaming_api2_tutorial.html)\n- [Online ASR with Emformer RNN-T](https://pytorch.org/audio/0.12.0/tutorials/online_asr_tutorial.html)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- [Device ASR with Emformer RNN-T](https://pytorch.org/audio/0.12.0/tutorials/device_asr.html)\n- [Accelerated Video Decoding with NVDEC](https://pytorch.org/audio/0.12.0/hw_acceleration_tutorial.html)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "\u2020 To use StreamReader, FFmpeg libraries are required. Please install FFmpeg. The coverage of codecs depends on how these libraries are configured. TorchAudio official binaries are compiled to work with FFmpeg 4 libraries; FFmpeg 5 can be used if TorchAudio is built from source.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(BETA) CTC Beam Search Decoder\n\nTorchAudio integrates the wav2letter CTC beam search decoder from [Flashlight](https://arxiv.org/pdf/2201.12465.pdf) ([GitHub](https://github.com/flashlight/flashlight)). The addition of this inference time decoder enables running end-to-end CTC ASR evaluation using TorchAudio utils.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Customizable lexicon and lexicon-free decoders are supported, and both are compatible with KenLM n-gram language models or without using a language model. TorchAudio additionally supports downloading token, lexicon, and pretrained KenLM files for the LibriSpeech dataset.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "For usage details, please check out the [documentation](https://pytorch.org/audio/0.12.0/models.decoder.html#ctcdecoder) and [ASR inference tutorial](https://pytorch.org/audio/0.12.0/tutorials/asr_inference_with_ctc_decoder_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(BETA) New Beamforming Modules and Methods", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "To improve flexibility in usage, the release adds two new beamforming modules under torchaudio.transforms: [SoudenMVDR](https://pytorch.org/audio/0.12.0/transforms.html#soudenmvdr) and [RTFMVDR](https://pytorch.org/audio/0.12.0/transforms.html#rtfmvdr). The main differences from [MVDR](https://pytorch.org/audio/0.11.0/transforms.html#mvdr) are:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- Use power spectral density (PSD) and relative transfer function (RTF) matrices as inputs instead of time-frequency masks. The module can be integrated with neural networks that directly predict complex-valued STFT coefficients of speech and noise\n- Add \\'reference_channel\\' as an input argument in the forward method, to allow users to select the reference channel in model training or dynamically change the reference channel in inference", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Besides the two modules, new function-level beamforming methods are added under torchaudio.functional. These include:\n- [psd](https://pytorch.org/audio/0.12.0/functional.html#psd)\n- [mvdr_weights_souden](https://pytorch.org/audio/0.12.0/functional.html#mvdr-weights-souden)\n- [mvdr_weights_rtf](https://pytorch.org/audio/0.12.0/functional.html#mvdr-weights-rtf)\n- [rtf_evd](https://pytorch.org/audio/0.12.0/functional.html#rtf-evd)\n- [rtf_power](https://pytorch.org/audio/0.12.0/functional.html#rtf-power)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- [apply_beamforming](https://pytorch.org/audio/0.12.0/functional.html#apply-beamforming)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "For usage details, please check out the documentation at [torchaudio.transforms](https://pytorch.org/audio/0.12.0/transforms.html#multi-channel) and [torchaudio.functional](https://pytorch.org/audio/0.12.0/functional.html#multi-channel) and the [Speech Enhancement with MVDR Beamforming tutorial](https://pytorch.org/audio/0.12.0/tutorials/mvdr_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchText v0.13", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Glue Datasets", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We increased the number of datasets in TorchText from 22 to 30 by adding the remaining 8 datasets from the GLUE benchmark (SST-2 was already supported). The complete list of GLUE datasets is as follows:\n- [CoLA](https://nyu-mll.github.io/CoLA/) ([paper](https://arxiv.org/pdf/1805.12471.pdf)): Single sentence binary classification acceptability task", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- [SST-2](https://nlp.stanford.edu/sentiment/) ([paper](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)): Single sentence binary classification sentiment task\n- [MRPC](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) ([paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/I05-50025B15D.pdf)): Dual sentence binary classification paraphrase task\n- [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs): Dual sentence binary classification paraphrase task", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- [STS-B](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark) ([paper](https://aclanthology.org/S17-2001.pdf)): Single sentence to float regression sentence similarity task\n- [MNLI](https://cims.nyu.edu/~sbowman/multinli/) ([paper](https://cims.nyu.edu/~sbowman/multinli/paper.pdf)): Sentence ternary classification NLI task\n- [QNLI](https://gluebenchmark.com/) ([paper](https://arxiv.org/pdf/1804.07461.pdf)): Sentence binary classification QA and NLI tasks", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- [RTE](https://aclweb.org/aclwiki/Recognizing_Textual_Entailment) ([paper](https://arxiv.org/pdf/2010.03061.pdf)): Dual sentence binary classification NLI task\n- [WNLI](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) ([paper](http://commonsensereasoning.org/2011/papers/Levesque.pdf)): Dual sentence binary classification coreference and NLI tasks", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Scriptable BERT Tokenizer\n\nTorchText has extended support for scriptable tokenizer by adding the WordPiece tokenizer used in BERT. It is one of the commonly used algorithms for splitting input text into sub-words units and was introduced in [Japanese and Korean Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchScriptabilty support would allow users to embed the BERT text-preprocessing natively in C++ without needing the support of python runtime. As TorchText now supports the CMAKE build system to natively link torchtext binaries with application code, users can easily integrate BERT tokenizers for deployment needs.\n\nFor usage details, please refer to the corresponding [documentation](https://pytorch.org/text/main/transforms.html#torchtext.transforms.BERTTokenizer).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchRec v0.2.0\n\n### EmbeddingModule + DLRM benchmarks\n\nA set of [benchmarking tests](https://github.com/pytorch/torchrec/tree/main/benchmarks), showing performance characteristics of TorchRec\u2019s base modules and research models built out of TorchRec.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TwoTower Retrieval Example, with FAISS\n\nWe provide an [example](https://github.com/pytorch/torchrec/tree/main/examples/retrieval) demonstrating training a distributed TwoTower (i.e. User-Item) Retrieval model that is sharded using TorchRec. The projected item embeddings are added to an IVFPQ FAISS index for candidate generation. The retrieval model and KNN lookup are bundled in a Pytorch model for efficient end-to-end retrieval.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Integrations", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We demonstrate that TorchRec works out of the box with many components commonly used alongside PyTorch models in production like systems, such as \n- [Training](https://github.com/pytorch/torchrec/tree/main/examples/ray) a TorchRec model on Ray Clusters utilizing the Torchx Ray scheduler\n- [Preprocessing](https://github.com/pytorch/torchrec/tree/main/torchrec/datasets/scripts/nvt) and DataLoading with NVTabular on DLRM", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "- [Training](https://github.com/pytorch/torchrec/tree/main/examples/torcharrow) a TorchRec model with on-the-fly preprocessing with TorchArrow showcasing RecSys domain UDFs", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Sequential Embeddings Example: Bert4Rec\n\nWe provide an [example](https://github.com/pytorch/torchrec/tree/main/examples/bert4rec), using TorchRec, that reimplements the [BERT4REC](https://arxiv.org/abs/1904.06690) paper, showcasing EmbeddingCollection for non-pooled embeddings. Using DistributedModelParallel we see a 35% QPS gain over conventional data parallelism.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Planner", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The TorchRec library includes a built-in [planner](https://pytorch.org/torchrec/torchrec.distributed.planner.html) that selects near optimal sharding plan for a given model. The planner attempts to identify the best sharding plan by evaluating a series of proposals which are statically analyzed and fed into an integer partitioner. The planner is able to automatically adjust plans for a wide range of hardware setups, allowing users to scale performance seamlessly from local development environment to large", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "scale production hardware. See this [notebook](https://github.com/pytorch/torchrec/blob/main/torchrec/distributed/planner/Planner_Introduction.ipynb) for a more detailed tutorial.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Inference", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "[TorchRec Inference](https://github.com/pytorch/torchrec/tree/main/torchrec/inference) is a C++ library that supports multi-gpu inference. The TorchRec library is used to shard models written and packaged in Python via torch.package (an alternative to TorchScript). The torch.deploy library is used to serve inference from C++ by launching multiple Python interpreters carrying the packaged model, thus subverting the GIL. Two models are provided as examples: [DLRM", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "multi-GPU](https://github.com/pytorch/torchrec/blob/main/examples/inference/dlrm_predict.py) (sharded via TorchRec) and [DLRM single-GPU](https://github.com/pytorch/torchrec/blob/main/examples/inference/dlrm_predict_single_gpu.py).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) RecMetrics", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "RecMetrics is a [metrics](https://github.com/pytorch/torchrec/tree/main/torchrec/metrics) library that collects common utilities and optimizations for Recommendation models. It extends [torchmetrics](https://torchmetrics.readthedocs.io/en/stable/).\n- A centralized metrics module that allows users to add new metrics\n- Commonly used metrics, including AUC, Calibration, CTR, MSE/RMSE, NE & Throughput\n- Optimization for metrics related operations to reduce the overhead of metric computation\n- Checkpointing", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) Single process Batched + Fused Embeddings", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Previously TorchRec\u2019s abstractions (EmbeddingBagCollection/EmbeddingCollection) over FBGEMM kernels, which provide benefits such as table batching, optimizer fusion, and UVM placement, could only be used in conjunction with DistributedModelParallel. We\u2019ve decoupled these notions from sharding, and introduced the [FusedEmbeddingBagCollection](https://github.com/pytorch/torchrec/blob/eb1247d8a2d16edc4952e5c2617e69acfe5477a5/torchrec/modules/fused_embedding_modules.py#L271), which can be used as a standalone", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "module, with all of the above features, and can also be sharded.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "TorchX v0.2.0\n\nTorchX is a job launcher that makes it easier to run PyTorch in distributed training clusters with many scheduler integrations including Kubernetes and Slurm. We're excited to release TorchX 0.2.0 with a number of improvements. TorchX is currently being used in production in both on-premise and cloud environments.\n\nCheck out the [quickstart](https://pytorch.org/torchx/main/quickstart.html) to start launching local and remote jobs.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Workspaces\n\nTorchX [now supports workspaces](https://pytorch.org/torchx/main/workspace.html) which allows users to easily launch training jobs using their local workspace. TorchX can automatically build a patch with your local training code on top of a base image to minimize iteration time and time to training.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": ".torchxconfig\n\nSpecifying options in [.torchxconfig](https://pytorch.org/torchx/latest/runner.config.html) saves you from having to type long CLI commands each time you launch a job. You can also define project level generic configs and drop a config file in your home directory for user-level overrides.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Expanded Scheduler Support\n\nTorchX now supports [AWS Batch](https://pytorch.org/torchx/main/schedulers/aws_batch.html) and [Ray (experimental)](https://pytorch.org/torchx/main/schedulers/ray.html) schedulers in addition to our existing integrations.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Distributed Training On All Schedulers\n\nThe TorchX dist.ddp component now works on all schedulers without any configuration. Distributed training workers will automatically discover each other when using [torchelastic](https://pytorch.org/docs/stable/distributed.elastic.html) via [the builtin dist.ddp component](https://pytorch.org/torchx/main/components/distributed.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Hyper Parameter Optimization\n\nTorchX [integrates with Ax](https://ax.dev/versions/latest/api/runners.html#module-ax.runners.torchx) to let you scale hyper-parameter optimizations (HPO) by launching the search trials onto remote clusters.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "File and Device Mounts\n\nTorchX now supports [remote filesystem mounts and custom devices](https://pytorch.org/torchx/main/specs.html#mounts). This enables your PyTorch jobs to efficiently access cloud storage such as NFS or Lustre. The device mounts enables usage of network accelerators like Infiniband and custom inference/training accelerators.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "FBGemm v0.2.0\n\nThe FBGEMM library contains optimized kernels meant to improve the performance of PyTorch workloads. We\u2019ve added a number of new features and optimizations over the last few months that we are excited to report.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Inference Table Batched Embedding (TBE)\n\nThe [table batched embedding bag](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L1541) (TBE) operator is an important base operation for embedding lookup for recommendation system inference on GPU. We added the following enhancements for performance and flexibility:\n\nAlignment restriction removed\n- Embedding dimension \\* data type size had to be multiple of 4B before and now, it is 1B.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Unified Virtual Memory (UVM) caching kernel optimizations\n- UVM caching kernels now scale linearly with # of tables using UVM caching. Previously, it was having similar overhead as all tables using UVM caching\n- UVM caching kernel overhead is much smaller than before", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Inference FP8 Table Batched Embedding (TBE)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The [table batched embedding bag](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L1541) (TBE) previously supported FP32, FP16, INT8, INT4, and INT2 embedding weight types. While these weight types work well in many models, we integrate FP8 weight types (in both GPU and CPU operations) to allow for numerical and performance evaluations of FP8 in our models. Compared to INT8, FP8 does not require the additional bias and scale storage and calculations.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, the next generation of H100 GPUs has the FP8 support on Tensor Core (mainly matmul ops).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Jagged Tensor Kernels\n\nWe added optimized kernels to speed up [TorchRec JaggedTensor](https://pytorch.org/torchrec/torchrec.sparse.html). The purpose of JaggedTensor is to handle the case where one dimension of the input data is \u201cjagged\u201d, meaning that each consecutive row in a given dimension may be a different length, which is often the case with sparse feature inputs in recommendation systems. The internal representation is shown below:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We added ops for [converting jagged tensors from sparse to dense formats](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L982) [and back](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L968), performing [matrix multiplications with jagged tensors](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L996), and [elementwise", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "ops](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L995).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Optimized permute102-baddbmm-permute102", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "It is difficult to fuse various matrix multiplications where the batch size is not the batch size of the model, switching the batch dimension is a quick solution. We created the [permute102_baddbmm_permute102](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/sparse_ops_cpu.cpp#L2401) operation that switches the first and the second dimension, performs the batched matrix multiplication and then switches back. Currently we only support forward pass with FP16 data type and will support FP32 type and", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "backward pass in the future.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Optimized index_select for dim 0 index selection", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "index_select is normally used as part of a sparse operation. While PyTorch supports a generic index_select for an arbitrary-dimension index selection, its performance for a special case like the dim 0 index selection is suboptimal. For this reason, we implement a [specialized index_select for dim 0](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/sparse_ops_cpu.cpp#L2421). In some cases, we have observed 1.4x performance gain from FBGEMM\u2019s index_select compared to the one from PyTorch (using", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "uniform index distribution).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "More about the implementation of influential instances can be found on our [GitHub](https://github.com/pytorch/captum/tree/master/captum/influence) page and [tutorials](https://captum.ai/tutorials/TracInCP_Tutorial).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "Summary:\n- **TorchVision** - Added multi-weight support API, new architectures, model variants, and pre-trained weights. See the release notes [here](https://github.com/pytorch/vision/releases).\n- **TorchAudio** - Introduced beta features including a streaming API, a CTC beam search decoder, and new beamforming modules and methods. See the release notes [here](https://github.com/pytorch/audio/releases).\n- **TorchText** - Extended support for scriptable BERT tokenizer and added datasets for the GLUE benchmark. See the release notes [here](https://github.com/pytorch/text/releases).\n- **TorchRec** - Added EmbeddingModule benchmarks, examples for TwoTower Retrieval, inference and sequential embeddings, metrics, an improved planner, and demonstrated integration with production components. See the release notes [here](https://github.com/pytorch/torchrec/releases).\n- **TorchX** - Launch PyTorch trainers developed on local workspaces onto five different types of schedulers. See the release notes [here](https://github.com/pytorch/torchx/blob/main/CHANGELOG.md?plain=1#L3).\n- **FBGemm** - Added and improved kernels for Recommendation Systems inference workloads, including table batched embedding bag, jagged tensor operations, and other special-case optimizations.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "## TorchVision v0.13\n\n### Multi-weight support API\n\nTorchVision v0.13 offers a new [Multi-weight support API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) for loading different weights to the existing model builder methods:\n\n```python\nfrom torchvision.models import *\n\n# Old weights with accuracy 76.130%\nresnet50(weights=ResNet50_Weights.IMAGENET1K_V1)\n\n# New weights with accuracy 80.858%\nresnet50(weights=ResNet50_Weights.IMAGENET1K_V2)\n\n# Best available weights (currently alias for IMAGENET1K_V2)\n# Note that these weights may change across versions\nresnet50(weights=ResNet50_Weights.DEFAULT)\n\n# Strings are also supported\nresnet50(weights=\"IMAGENET1K_V2\")\n\n# No weights - random initialization\nresnet50(weights=None)\n```\n\nThe new API bundles along with the weights important details such as the preprocessing transforms and meta-data such as labels. Here is how to make the most out of it:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "```python\nfrom torchvision.io import read_image\nfrom torchvision.models import resnet50, ResNet50_Weights\n\nimg = read_image(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\n\n# Step 1: Initialize model with the best available weights\nweights = ResNet50_Weights.DEFAULT\nmodel = resnet50(weights=weights)\nmodel.eval()\n\n# Step 2: Initialize the inference transforms\npreprocess = weights.transforms()\n\n# Step 3: Apply inference preprocessing transforms\nbatch = preprocess(img).unsqueeze(0)\n\n# Step 4: Use the model and print the predicted category\nprediction = model(batch).squeeze(0).softmax(0)\nclass_id = prediction.argmax().item()\nscore = prediction[class_id].item()\ncategory_name = weights.meta[\"categories\"][class_id]\nprint(f\"{category_name}: {100 * score:.1f}%\")\n```\n\nYou can read more about the new API in the [docs](https://pytorch.org/vision/0.13/models.html). To provide your feedback, please use this dedicated [Github issue](https://github.com/pytorch/vision/issues/5088).\n\n### New architectures and model variants", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "#### Classification\n\nThe [Swin Transformer](https://arxiv.org/abs/2103.14030) and [EfficientNetV2](https://arxiv.org/abs/2104.00298) are two popular classification models which are often used for downstream vision tasks. This release includes 6 pre-trained weights for their classification variants. Here is how to use the new models:\n\n```python\nimport torch\nfrom torchvision.models import *\n\nimage = torch.rand(1, 3, 224, 224)\nmodel = swin_t(weights=\"DEFAULT\").eval()\nprediction = model(image)\n\nimage = torch.rand(1, 3, 384, 384)\nmodel = efficientnet_v2_s(weights=\"DEFAULT\").eval()\nprediction = model(image)\n```\n\nIn addition to the above, we also provide new variants for existing architectures such as ShuffleNetV2, ResNeXt and MNASNet. The accuracies of all the new pre-trained models obtained on ImageNet-1K can be seen below:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "| **Model**                      | **Acc@1** | **Acc@5** |\n|--------------------------------|-----------|-----------|\n| swin_t                         | 81.474    | 95.776    |\n| swin_s                         | 83.196    | 96.36     |\n| swin_b                         | 83.582    | 96.64     |\n| efficientnet_v2_s              | 84.228    | 96.878    |\n| efficientnet_v2_m              | 85.112    | 97.156    |\n| efficientnet_v2_l              | 85.808    | 97.788    |\n| resnext101_64x4d               | 83.246    | 96.454    |\n| resnext101_64x4d (quantized)   | 82.898    | 96.326    |\n| shufflenet_v2_x1_5             | 72.996    | 91.086    |\n| shufflenet_v2_x1_5 (quantized) | 72.052    | 90.700    |\n| shufflenet_v2_x2_0             | 76.230    | 93.006    |\n| shufflenet_v2_x2_0 (quantized) | 75.354    | 92.488    |\n| mnasnet0_75                    | 71.180    | 90.496    |\n| mnasnet1_3                     | 76.506    | 93.522    |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "We would like to thank Hu Ye for contributing the Swin Transformer implementation to TorchVision.\n\n#### (BETA) Object Detection and Instance Segmentation\n\nWe have introduced 3 new model variants for RetinaNet, FasterRCNN and MaskRCNN that include several [post-paper architectural optimizations](https://github.com/pytorch/vision/pull/5444) and improved training recipes. All models can be used similarly:\n\n```python\nimport torch\nfrom torchvision.models.detection import *\n\nimages = [torch.rand(3, 800, 600)]\nmodel = retinanet_resnet50_fpn_v2(weights=\"DEFAULT\")\n# model = fasterrcnn_resnet50_fpn_v2(weights=\"DEFAULT\")\n# model = maskrcnn_resnet50_fpn_v2(weights=\"DEFAULT\")\nmodel.eval()\nprediction = model(images)\n```\n\nBelow we present the metrics of the new variants on COCO val2017. In parentheses we denote the improvement over the old variants:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "| **Model**                  | **Box mAP** | **Mask mAP** |\n|----------------------------|-------------|--------------|\n| retinanet_resnet50_fpn_v2  | 41.5 (+5.1) | -            |\n| fasterrcnn_resnet50_fpn_v2 | 46.7 (+9.7) | -            |\n| maskrcnn_resnet50_fpn_v2   | 47.4 (+9.5) | 41.8 (+7.2)  |\n\nWe would like to thank Ross Girshick, Piotr Dollar, Vaibhav Aggarwal, Francisco Massa and Hu Ye for their past research and contributions to this work.\n\n### New pre-trained weights\n\n#### SWAG weights\n\nThe ViT and RegNet model variants offer new pre-trained [SWAG](https://arxiv.org/abs/2201.08371) (Supervised Weakly from hashtAGs) weights. One of the biggest of these models achieves a whopping 88.6% accuracy on ImageNet-1K. We currently offer two versions of the weights: 1) fine-tuned end-to-end weights on ImageNet-1K (highest accuracy) and 2) frozen trunk weights with a linear classifier fit on ImageNet-1K (great for transfer learning). Below we see the detailed accuracies of each model variant:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "| **Model Weights** | **Acc@1** | **Acc@5** |\n|--------------------------------------------------|-----------|-----------|\n| RegNet_Y_16GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 86.012 | 98.054 |\n| RegNet_Y_16GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 83.976 | 97.244 |\n| RegNet_Y_32GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 86.838 | 98.362 |\n| RegNet_Y_32GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 84.622 | 97.48 |\n| RegNet_Y_128GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.228 | 98.682 |\n| RegNet_Y_128GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 86.068 | 97.844 |\n| ViT_B_16_Weights.IMAGENET1K_SWAG_E2E_V1 | 85.304 | 97.65 |\n| ViT_B_16_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 81.886 | 96.18 |\n| ViT_L_16_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.064 | 98.512 |\n| ViT_L_16_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 85.146 | 97.422 |\n| ViT_H_14_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.552 | 98.694 |\n| ViT_H_14_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 85.708 | 97.73 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
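+{"page_content": "To make the table above actionable, here is a minimal sketch of loading one of the SWAG checkpoints through the multi-weight API; the image path is a placeholder and the input resolution is handled by the bundled transforms:\n\n```python\nfrom torchvision.io import read_image\nfrom torchvision.models import vit_h_14, ViT_H_14_Weights\n\n# End-to-end fine-tuned SWAG checkpoint and its matching inference transforms\nweights = ViT_H_14_Weights.IMAGENET1K_SWAG_E2E_V1\nmodel = vit_h_14(weights=weights).eval()\npreprocess = weights.transforms()\n\nimg = read_image(\"path/to/image.jpg\")  # placeholder path\nbatch = preprocess(img).unsqueeze(0)\nclass_id = model(batch).squeeze(0).softmax(0).argmax().item()\nprint(weights.meta[\"categories\"][class_id])\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}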
+{"page_content": "The SWAG weights are released under the [Attribution-NonCommercial 4.0 International](https://github.com/facebookresearch/SWAG/blob/main/LICENSE) license. We would like to thank Laura Gustafson, Mannat Singh and Aaron Adcock for their work and support in making the weights available to TorchVision.\n\n#### Model Refresh\n\nThe release of the Multi-weight support API enabled us to refresh the most popular models and offer more accurate weights. We improved each model by ~3 points on average. The new recipe was learned on top of ResNet50 and its details were covered in a [previous blog post](https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "| **Model** | **Old weights** | **New weights** |\n|------------------------------|-----------------|-----------------|\n| efficientnet_b1 | 78.642 | 79.838 |\n| mobilenet_v2 | 71.878 | 72.154 |\n| mobilenet_v3_large | 74.042 | 75.274 |\n| regnet_y_400mf | 74.046 | 75.804 |\n| regnet_y_800mf | 76.42 | 78.828 |\n| regnet_y_1_6gf | 77.95 | 80.876 |\n| regnet_y_3_2gf | 78.948 | 81.982 |\n| regnet_y_8gf | 80.032 | 82.828 |\n| regnet_y_16gf | 80.424 | 82.886 |\n| regnet_y_32gf | 80.878 | 83.368 |\n| regnet_x_400mf | 72.834 | 74.864 |\n| regnet_x_800mf | 75.212 | 77.522 |\n| regnet_x_1_6gf | 77.04 | 79.668 |\n| regnet_x_3_2gf | 78.364 | 81.196 |\n| regnet_x_8gf | 79.344 | 81.682 |\n| regnet_x_16gf | 80.058 | 82.716 |\n| regnet_x_32gf | 80.622 | 83.014 |\n| resnet50 | 76.13 | 80.858 |\n| resnet50 (quantized) | 75.92 | 80.282 |\n| resnet101 | 77.374 | 81.886 |\n| resnet152 | 78.312 | 82.284 |\n| resnext50_32x4d | 77.618 | 81.198 |\n| resnext101_32x8d | 79.312 | 82.834 |\n| resnext101_32x8d (quantized) | 78.986 | 82.574 |\n| wide_resnet50_2 | 78.468 | 81.602 |\n| wide_resnet101_2 | 78.848 | 82.51 |", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "We would like to thank Piotr Dollar, Mannat Singh and Hugo Touvron for their past research and contributions to this work.\n\n### New Augmentations, Layers and Losses\n\nThis release brings a bunch of new primitives which can be used to produce SOTA models. Some highlights include the addition of the [AugMix](https://arxiv.org/abs/1912.02781) data-augmentation method, the [DropBlock](https://arxiv.org/abs/1810.12890) layer, the [cIoU/dIoU](https://arxiv.org/abs/1911.08287) loss and [many more](https://github.com/pytorch/vision/issues/5410). We would like to thank Aditya Oke, Abhijit Deo, Yassine Alouini and Hu Ye for contributing to the project and for helping us keep TorchVision relevant and fresh.\n\n### Documentation", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "We completely revamped our models documentation to make them easier to browse, and added various key information such as supported image sizes, or image pre-processing steps of pre-trained weights. We now have a [main model page](https://pytorch.org/vision/main/models.html) with various [summary tables](https://pytorch.org/vision/main/models.html#table-of-all-available-classification-weights) of available weights, and each model has a [dedicated page](https://pytorch.org/vision/main/models/resnet.html). Each model builder is also documented in their [own page](https://pytorch.org/vision/main/models/generated/torchvision.models.resnet50.html#torchvision.models.resnet50), with more details about the available weights, including accuracy, minimal image size, link to training recipes, and other valuable info. For comparison, our previous models docs are [here](https://pytorch.org/vision/0.12/models.html). To provide feedback on the new documentation, please use the dedicated [Github issue](https://github.com/pytorch/vision/issues/5511).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "## TorchAudio v0.12\n\n### (BETA) Streaming API\n\nStreamReader is TorchAudio\u2019s new I/O API. It is backed by FFmpeg\u2020, and allows users to:\n- Decode audio and video formats, including MP4 and AAC\n- Handle input forms, such as local files, network protocols, microphones, webcams, screen captures and file-like objects\n- Iterate over and decode chunk-by-chunk, while changing the sample rate or frame rate\n- Apply audio and video filters, such as low-pass filter and image scaling\n- Decode video with Nvidia's hardware-based decoder (NVDEC)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
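+{"page_content": "As a minimal sketch of the API (assuming a local file named input.mp4 as the source), decoding audio chunk-by-chunk while resampling to 16 kHz looks roughly like this:\n\n```python\nfrom torchaudio.io import StreamReader\n\n# Open a media source; network URLs, devices and file-like objects work the same way\nstreamer = StreamReader(src=\"input.mp4\")  # placeholder path\n\n# Request decoded audio in fixed-size chunks, resampled to 16 kHz\nstreamer.add_basic_audio_stream(frames_per_chunk=16000, sample_rate=16000)\n\nfor chunks in streamer.stream():\n    waveform = chunks[0]  # tensor of shape (frames, channels) for this chunk\n    print(waveform.shape)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}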
+{"page_content": "For usage details, please check out the [documentation](https://pytorch.org/audio/0.12.0/io.html#streamreader) and tutorials:\n- [Media Stream API - Pt.1](https://pytorch.org/audio/0.12.0/tutorials/streaming_api_tutorial.html)\n- [Media Stream API - Pt.2](https://pytorch.org/audio/0.12.0/tutorials/streaming_api2_tutorial.html)\n- [Online ASR with Emformer RNN-T](https://pytorch.org/audio/0.12.0/tutorials/online_asr_tutorial.html)\n- [Device ASR with Emformer RNN-T](https://pytorch.org/audio/0.12.0/tutorials/device_asr.html)\n- [Accelerated Video Decoding with NVDEC](https://pytorch.org/audio/0.12.0/hw_acceleration_tutorial.html)\n\n\u2020 To use StreamReader, FFmpeg libraries are required. Please install FFmpeg. The coverage of codecs depends on how these libraries are configured. TorchAudio official binaries are compiled to work with FFmpeg 4 libraries; FFmpeg 5 can be used if TorchAudio is built from source.\n\n### (BETA) CTC Beam Search Decoder", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "TorchAudio integrates the wav2letter CTC beam search decoder from [Flashlight](https://arxiv.org/pdf/2201.12465.pdf) ([GitHub](https://github.com/flashlight/flashlight)). The addition of this inference-time decoder enables running end-to-end CTC ASR evaluation using TorchAudio utils.\n\nBoth customizable lexicon-based and lexicon-free decoders are supported, and both can be used with or without a KenLM n-gram language model. TorchAudio additionally supports downloading token, lexicon, and pretrained KenLM files for the LibriSpeech dataset.\n\nFor usage details, please check out the [documentation](https://pytorch.org/audio/0.12.0/models.decoder.html#ctcdecoder) and [ASR inference tutorial](https://pytorch.org/audio/0.12.0/tutorials/asr_inference_with_ctc_decoder_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
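+{"page_content": "As a rough sketch, one way to build a decoder from the downloadable LibriSpeech files and decode an emission tensor (the CTC log-probabilities produced upstream by an acoustic model, assumed here to be available as `emission`) is:\n\n```python\nimport torch\nfrom torchaudio.models.decoder import ctc_decoder, download_pretrained_files\n\n# Token, lexicon and KenLM files for LibriSpeech, downloaded by TorchAudio\nfiles = download_pretrained_files(\"librispeech-4-gram\")\n\ndecoder = ctc_decoder(\n    lexicon=files.lexicon,\n    tokens=files.tokens,\n    lm=files.lm,   # KenLM n-gram; pass lm=None to decode without a language model\n    nbest=3,\n    beam_size=50,\n)\n\ndef transcribe(emission: torch.Tensor) -> str:\n    # emission: (batch, frames, num_tokens) CPU float tensor of CTC log-probabilities\n    hypotheses = decoder(emission)\n    return \" \".join(hypotheses[0][0].words)  # best hypothesis for the first utterance\n```\n\n### (BETA) New Beamforming Modules and Methods", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}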
+{"page_content": "To improve flexibility in usage, the release adds two new beamforming modules under torchaudio.transforms: [SoudenMVDR](https://pytorch.org/audio/0.12.0/transforms.html#soudenmvdr) and [RTFMVDR](https://pytorch.org/audio/0.12.0/transforms.html#rtfmvdr). The main differences from [MVDR](https://pytorch.org/audio/0.11.0/transforms.html#mvdr) are:\n- Use power spectral density (PSD) and relative transfer function (RTF) matrices as inputs instead of time-frequency masks. The module can be integrated with neural networks that directly predict complex-valued STFT coefficients of speech and noise\n- Add 'reference_channel' as an input argument in the forward method, to allow users to select the reference channel in model training or dynamically change the reference channel in inference", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
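+{"page_content": "As an illustrative sketch (with random tensors standing in for a real multi-channel STFT and network-predicted masks), SoudenMVDR can be driven by PSD matrices computed with torchaudio.functional.psd:\n\n```python\nimport torch\nimport torchaudio.functional as F\nimport torchaudio.transforms as T\n\nchannels, freq, time = 4, 257, 100\n# Complex multi-channel spectrogram of the noisy mixture (placeholder values)\nspecgram = torch.randn(channels, freq, time, dtype=torch.cfloat)\n# Time-frequency masks for speech and noise, e.g. predicted by a neural network\nmask_speech = torch.rand(freq, time)\nmask_noise = 1.0 - mask_speech\n\n# Build PSD matrices for speech and noise, then apply Souden-style MVDR\npsd_speech = F.psd(specgram, mask_speech)\npsd_noise = F.psd(specgram, mask_noise)\nenhanced = T.SoudenMVDR()(specgram, psd_speech, psd_noise, reference_channel=0)\nprint(enhanced.shape)  # (freq, time) single-channel enhanced spectrogram\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}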
+{"page_content": "Besides the two modules, new function-level beamforming methods are added under torchaudio.functional. These include:\n- [psd](https://pytorch.org/audio/0.12.0/functional.html#psd)\n- [mvdr_weights_souden](https://pytorch.org/audio/0.12.0/functional.html#mvdr-weights-souden)\n- [mvdr_weights_rtf](https://pytorch.org/audio/0.12.0/functional.html#mvdr-weights-rtf)\n- [rtf_evd](https://pytorch.org/audio/0.12.0/functional.html#rtf-evd)\n- [rtf_power](https://pytorch.org/audio/0.12.0/functional.html#rtf-power)\n- [apply_beamforming](https://pytorch.org/audio/0.12.0/functional.html#apply-beamforming)\n\nFor usage details, please check out the documentation at [torchaudio.transforms](https://pytorch.org/audio/0.12.0/transforms.html#multi-channel) and [torchaudio.functional](https://pytorch.org/audio/0.12.0/functional.html#multi-channel) and the [Speech Enhancement with MVDR Beamforming tutorial](https://pytorch.org/audio/0.12.0/tutorials/mvdr_tutorial.html).\n\n## TorchText v0.13\n\n### Glue Datasets", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "We increased the number of datasets in TorchText from 22 to 30 by adding the remaining 8 datasets from the GLUE benchmark (SST-2 was already supported). The complete list of GLUE datasets is as follows:\n- [CoLA](https://nyu-mll.github.io/CoLA/) ([paper](https://arxiv.org/pdf/1805.12471.pdf)): Single sentence binary classification acceptability task\n- [SST-2](https://nlp.stanford.edu/sentiment/) ([paper](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)): Single sentence binary classification sentiment task\n- [MRPC](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf) ([paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/I05-50025B15D.pdf)): Dual sentence binary classification paraphrase task\n- [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs): Dual sentence binary classification paraphrase task\n- [STS-B](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark) ([paper](https://aclanthology.org/S17-2001.pdf)): Single sentence to float regression sentence similarity task\n- [MNLI](https://cims.nyu.edu/~sbowman/multinli/) ([paper](https://cims.nyu.edu/~sbowman/multinli/paper.pdf)): Sentence ternary classification NLI task\n- [QNLI](https://gluebenchmark.com/) ([paper](https://arxiv.org/pdf/1804.07461.pdf)): Sentence binary classification QA and NLI tasks\n- [RTE](https://aclweb.org/aclwiki/Recognizing_Textual_Entailment) ([paper](https://arxiv.org/pdf/2010.03061.pdf)): Dual sentence binary classification NLI task\n- [WNLI](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) ([paper](http://commonsensereasoning.org/2011/papers/Levesque.pdf)): Dual sentence binary classification coreference and NLI tasks", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
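+{"page_content": "As a small usage sketch (the exact field order of each sample follows the torchtext documentation for the specific dataset), the new datasets are exposed as datapipes:\n\n```python\nfrom torchtext.datasets import SST2\n\n# GLUE datasets are exposed as datapipes; split names follow the benchmark\ntrain_dp = SST2(split=\"train\")\ndev_dp = SST2(split=\"dev\")\n\n# Each element is a tuple of the dataset's fields (for SST-2: sentence text and label)\nprint(next(iter(train_dp)))\nprint(next(iter(dev_dp)))\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}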
+{"page_content": "### Scriptable BERT Tokenizer\n\nTorchText has extended support for scriptable tokenizers by adding the WordPiece tokenizer used in BERT. It is one of the commonly used algorithms for splitting input text into sub-word units and was introduced in [Japanese and Korean Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf).\n\nTorchScriptability support allows users to embed the BERT text-preprocessing natively in C++ without needing a Python runtime. As TorchText now supports the CMAKE build system to natively link torchtext binaries with application code, users can easily integrate BERT tokenizers for deployment needs.\n\nFor usage details, please refer to the corresponding [documentation](https://pytorch.org/text/main/transforms.html#torchtext.transforms.BERTTokenizer).\n\n## TorchRec v0.2.0\n\n### EmbeddingModule + DLRM benchmarks", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "A set of [benchmarking tests](https://github.com/pytorch/torchrec/tree/main/benchmarks) shows the performance characteristics of TorchRec\u2019s base modules and of research models built out of TorchRec.\n\n### TwoTower Retrieval Example, with FAISS\n\nWe provide an [example](https://github.com/pytorch/torchrec/tree/main/examples/retrieval) demonstrating how to train a distributed TwoTower (i.e. User-Item) Retrieval model that is sharded using TorchRec. The projected item embeddings are added to an IVFPQ FAISS index for candidate generation. The retrieval model and KNN lookup are bundled in a PyTorch model for efficient end-to-end retrieval.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "### Integrations\n\nWe demonstrate that TorchRec works out of the box with many components commonly used alongside PyTorch models in production-like systems, such as:\n- [Training](https://github.com/pytorch/torchrec/tree/main/examples/ray) a TorchRec model on Ray Clusters utilizing the Torchx Ray scheduler\n- [Preprocessing](https://github.com/pytorch/torchrec/tree/main/torchrec/datasets/scripts/nvt) and DataLoading with NVTabular on DLRM\n- [Training](https://github.com/pytorch/torchrec/tree/main/examples/torcharrow) a TorchRec model with on-the-fly preprocessing with TorchArrow showcasing RecSys domain UDFs\n\n### Sequential Embeddings Example: Bert4Rec\n\nWe provide an [example](https://github.com/pytorch/torchrec/tree/main/examples/bert4rec), using TorchRec, that reimplements the [BERT4REC](https://arxiv.org/abs/1904.06690) paper, showcasing EmbeddingCollection for non-pooled embeddings. Using DistributedModelParallel we see a 35% QPS gain over conventional data parallelism.\n\n### (Beta) Planner", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "The TorchRec library includes a built-in [planner](https://pytorch.org/torchrec/torchrec.distributed.planner.html) that selects a near-optimal sharding plan for a given model. The planner attempts to identify the best sharding plan by evaluating a series of proposals which are statically analyzed and fed into an integer partitioner. The planner is able to automatically adjust plans for a wide range of hardware setups, allowing users to scale performance seamlessly from a local development environment to large-scale production hardware. See this [notebook](https://github.com/pytorch/torchrec/blob/main/torchrec/distributed/planner/Planner_Introduction.ipynb) for a more detailed tutorial.\n\n### (Beta) Inference", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "### (Beta) Inference\n\n[TorchRec Inference](https://github.com/pytorch/torchrec/tree/main/torchrec/inference) is a C++ library that supports multi-gpu inference. The TorchRec library is used to shard models written and packaged in Python via torch.package (an alternative to TorchScript). The torch.deploy library is used to serve inference from C++ by launching multiple Python interpreters carrying the packaged model, thus subverting the GIL. Two models are provided as examples: [DLRM multi-GPU](https://github.com/pytorch/torchrec/blob/main/examples/inference/dlrm_predict.py) (sharded via TorchRec) and [DLRM single-GPU](https://github.com/pytorch/torchrec/blob/main/examples/inference/dlrm_predict_single_gpu.py).\n\n### (Beta) RecMetrics", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "RecMetrics is a [metrics](https://github.com/pytorch/torchrec/tree/main/torchrec/metrics) library that collects common utilities and optimizations for Recommendation models. It extends [torchmetrics](https://torchmetrics.readthedocs.io/en/stable/).\n- A centralized metrics module that allows users to add new metrics\n- Commonly used metrics, including AUC, Calibration, CTR, MSE/RMSE, NE & Throughput\n- Optimization for metrics related operations to reduce the overhead of metric computation\n- Checkpointing\n\n### (Prototype) Single process Batched + Fused Embeddings", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "Previously TorchRec\u2019s abstractions (EmbeddingBagCollection/EmbeddingCollection) over FBGEMM kernels, which provide benefits such as table batching, optimizer fusion, and UVM placement, could only be used in conjunction with DistributedModelParallel. We\u2019ve decoupled these notions from sharding, and introduced the [FusedEmbeddingBagCollection](https://github.com/pytorch/torchrec/blob/eb1247d8a2d16edc4952e5c2617e69acfe5477a5/torchrec/modules/fused_embedding_modules.py#L271), which can be used as a standalone module, with all of the above features, and can also be sharded.\n\n## TorchX v0.2.0\n\nTorchX is a job launcher that makes it easier to run PyTorch in distributed training clusters with many scheduler integrations including Kubernetes and Slurm. We're excited to release TorchX 0.2.0 with a number of improvements. TorchX is currently being used in production in both on-premise and cloud environments.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "Check out the [quickstart](https://pytorch.org/torchx/main/quickstart.html) to start launching local and remote jobs.\n\n### Workspaces\n\nTorchX [now supports workspaces](https://pytorch.org/torchx/main/workspace.html) which allows users to easily launch training jobs using their local workspace. TorchX can automatically build a patch with your local training code on top of a base image to minimize iteration time and time to training.\n\n### .torchxconfig\n\nSpecifying options in [.torchxconfig](https://pytorch.org/torchx/latest/runner.config.html) saves you from having to type long CLI commands each time you launch a job. You can also define project level generic configs and drop a config file in your home directory for user-level overrides.\n\n### Expanded Scheduler Support\n\nTorchX now supports [AWS Batch](https://pytorch.org/torchx/main/schedulers/aws_batch.html) and [Ray (experimental)](https://pytorch.org/torchx/main/schedulers/ray.html) schedulers in addition to our existing integrations.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "### Distributed Training On All Schedulers\n\nThe TorchX dist.ddp component now works on all schedulers without any configuration. Distributed training workers will automatically discover each other when using [torchelastic](https://pytorch.org/docs/stable/distributed.elastic.html) via [the builtin dist.ddp component](https://pytorch.org/torchx/main/components/distributed.html).\n\n### Hyper Parameter Optimization\n\nTorchX [integrates with Ax](https://ax.dev/versions/latest/api/runners.html#module-ax.runners.torchx) to let you scale hyper-parameter optimizations (HPO) by launching the search trials onto remote clusters.\n\n### File and Device Mounts\n\nTorchX now supports [remote filesystem mounts and custom devices](https://pytorch.org/torchx/main/specs.html#mounts). This enables your PyTorch jobs to efficiently access cloud storage such as NFS or Lustre. The device mounts enables usage of network accelerators like Infiniband and custom inference/training accelerators.\n\n## FBGemm v0.2.0", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "## FBGemm v0.2.0\n\nThe FBGEMM library contains optimized kernels meant to improve the performance of PyTorch workloads. We\u2019ve added a number of new features and optimizations over the last few months that we are excited to report.\n\n### Inference Table Batched Embedding (TBE)\n\nThe [table batched embedding bag](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L1541) (TBE) operator is an important base operation for embedding lookup for recommendation system inference on GPU. We added the following enhancements for performance and flexibility:\n\nAlignment restriction removed\n- Embedding dimension \\* data type size had to be multiple of 4B before and now, it is 1B.\n\nUnified Virtual Memory (UVM) caching kernel optimizations\n- UVM caching kernels now scale linearly with # of tables using UVM caching. Previously, it was having similar overhead as all tables using UVM caching\n- UVM caching kernel overhead is much smaller than before", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "### Inference FP8 Table Batched Embedding (TBE) \n\nThe [table batched embedding bag](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L1541) (TBE) previously supported FP32, FP16, INT8, INT4, and INT2 embedding weight types. While these weight types work well in many models, we integrate FP8 weight types (in both GPU and CPU operations) to allow for numerical and performance evaluations of FP8 in our models. Compared to INT8, FP8 does not require the additional bias and scale storage and calculations. Additionally, the next generation of H100 GPUs has the FP8 support on Tensor Core (mainly matmul ops).\n\n### Jagged Tensor Kernels", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "We added optimized kernels to speed up [TorchRec JaggedTensor](https://pytorch.org/torchrec/torchrec.sparse.html). The purpose of JaggedTensor is to handle the case where one dimension of the input data is \u201cjagged\u201d, meaning that each consecutive row in a given dimension may be a different length, which is often the case with sparse feature inputs in recommendation systems. The internal representation is shown below:\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "We added ops for [converting jagged tensors from sparse to dense formats](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L982) [and back](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L968), performing [matrix multiplications with jagged tensors](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L996), and [elementwise ops](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/jagged_tensor_ops_cpu.cpp#L995).\n \n### Optimized permute102-baddbmm-permute102", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "It is difficult to fuse various matrix multiplications where the batch size is not the batch size of the model, switching the batch dimension is a quick solution. We created the [permute102_baddbmm_permute102](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/sparse_ops_cpu.cpp#L2401) operation that switches the first and the second dimension, performs the batched matrix multiplication and then switches back. Currently we only support forward pass with FP16 data type and will support FP32 type and backward pass in the future.\n\n### Optimized index_select for dim 0 index selection", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "index_select is normally used as part of a sparse operation. While PyTorch supports a generic index_select for an arbitrary-dimension index selection, its performance for a special case like the dim 0 index selection is suboptimal. For this reason, we implement a [specialized index_select for dim 0](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/src/sparse_ops_cpu.cpp#L2421). In some cases, we have observed 1.4x performance gain from FBGEMM\u2019s index_select compared to the one from PyTorch (using uniform index distribution).\n\nMore about the implementation of influential instances can be found on our [GitHub](https://github.com/pytorch/captum/tree/master/captum/influence) page and [tutorials](https://captum.ai/tutorials/TracInCP_Tutorial).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
{"page_content": "Thanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), and [LinkedIn](https://www.linkedin.com/company/pytorch).\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.12-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'The torch.linalg module: Accelerated Linear Algebra with Autograd in PyTorch'\nauthor: Mike Ruberry, Ivan Yashchuk, Xiao Wang, Mario Lezcano and Natalia Gimelshein\nfeatured-img: 'assets/images/cholesky-decomposition.png'\n---", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "Linear algebra is essential to deep learning and scientific computing, and it\u2019s always been a core part of PyTorch. PyTorch 1.9 extends PyTorch\u2019s support for linear algebra operations with the ```torch.linalg``` module. This module, documented [here](https://pytorch.org/docs/master/linalg.html?highlight=linalg#module-torch.linalg), has 26 operators, including faster and easier to use versions of older PyTorch operators, every function from [NumPy\u2019s linear algebra", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "module](https://numpy.org/doc/stable/reference/routines.linalg.html) extended with accelerator and autograd support, and a few operators that are completely new. This makes the ```torch.linalg``` immediately familiar to NumPy users and an exciting update to PyTorch\u2019s linear algebra support.", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "# NumPy-like linear algebra in PyTorch\n\nIf you\u2019re familiar with NumPy\u2019s linear algebra module then it\u2019ll be easy to start using ```torch.linalg```. In most cases it\u2019s a drop-in replacement. Let\u2019s looking at drawing samples from a [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) using the [Cholesky decomposition](https://en.wikipedia.org/wiki/Cholesky_decomposition) as a motivating example to demonstrate this:\n\n```python\nimport numpy as np", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "# Creates inputs\nnp.random.seed(0)\nmu_np = np.random.rand(4)\nL = np.random.rand(4, 4)\n# Covariance matrix sigma is positive-definite\nsigma_np = L @ L.T + np.eye(4)\nnormal_noise_np = np.random.standard_normal(mu_np.size)\n\ndef multivariate_normal_sample_np(mu, sigma, normal_noise):\n return mu + np.linalg.cholesky(sigma) @ normal_noise\n\nprint(\"Random sample: \", \n multivariate_normal_sample_np(mu_np, sigma_np, normal_noise_np))\n: Random sample: [2.9502426 1.78518077 1.83168697 0.90798228]", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "Now let\u2019s see the same sampler implemented in PyTorch:\n\n```python\nimport torch\n\ndef multivariate_normal_sample_torch(mu, sigma, normal_noise):\n return mu + torch.linalg.cholesky(sigma) @ normal_noise", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "The two functions are identical, and we can validate their behavior by calling the function with the same arguments wrapped as PyTorch tensors:\n\n```python\n# NumPy arrays are wrapped as tensors and share their memory\nmu_torch = torch.from_numpy(mu_np)\nsigma_torch = torch.from_numpy(sigma_np)\nnormal_noise_torch = torch.from_numpy(normal_noise_np)\n\nmultivariate_normal_sample_torch(mu_torch, sigma_torch, normal_noise_torch)\n: tensor([2.9502, 1.7852, 1.8317, 0.9080], dtype=torch.float64)", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "The only difference is in how PyTorch prints tensors by default.", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "The Cholesky decomposition can also help us quickly compute the probability density function of the non-degenerate multivariate normal distribution. One of the expensive terms in that computation is the square root of the determinant of the covariance matrix. Using [properties of the determinant](https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant) and the Cholesky decomposition we can calculate the same result faster than the naive computation, however. Here\u2019s the NumPy program that", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "demonstrates this:", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "```python\nsqrt_sigma_det_np = np.sqrt(np.linalg.det(sigma_np))\nsqrt_L_det_np = np.prod(np.diag(np.linalg.cholesky(sigma_np)))\n\nprint(\"|sigma|^0.5 = \", sqrt_sigma_det_np)\n: |sigma|^0.5 = 4.237127491242027\n \nprint(\"|L| = \", sqrt_L_det_np)\n: |L| = 4.237127491242028", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "And here\u2019s the same validation in PyTorch:\n\n```python\nsqrt_sigma_det_torch = torch.sqrt(torch.linalg.det(sigma_torch))\nsqrt_L_det_torch = torch.prod(torch.diag(torch.linalg.cholesky(sigma_torch)))\n\nprint(\"|sigma|^0.5 = \", sqrt_sigma_det_torch)\n: |sigma|^0.5 = tensor(4.2371, dtype=torch.float64) \n\nprint(\"|L| = \", sqrt_L_det_torch)\n: |L| = tensor(4.2371, dtype=torch.float64)", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "We can measure the difference in run time using PyTorch\u2019s built-in benchmark utility:\n\n```python\nimport torch.utils.benchmark as benchmark\n\nt0 = benchmark.Timer(\n stmt='torch.sqrt(torch.linalg.det(sigma))',\n globals={'sigma': sigma_torch})\n\nt1 = benchmark.Timer(\n stmt='torch.prod(torch.diag(torch.linalg.cholesky(sigma)))',\n globals={'sigma': sigma_torch})\n\nprint(t0.timeit(100))\n: torch.sqrt(torch.linalg.det(sigma))\n 80.80 us\n 1 measurement, 100 runs , 1 thread", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "print(t1.timeit(100))\n: torch.prod(torch.diag(torch.linalg.cholesky(sigma)))\n 11.56 us\n 1 measurement, 100 runs , 1 thread\n ```\n \nDemonstrating that the approach using the Cholesky decomposition can be significantly faster. Behind the scenes, PyTorch\u2019s linear algebra module uses OpenBLAS or MKL implementations of the LAPACK standard to maximize its CPU performance.\n\n# Autograd Support", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch\u2019s linear algebra module doesn\u2019t just implement the same functions as NumPy\u2019s linear algebra module (and a few more), it also extends them with autograd and CUDA support.\n\nLet\u2019s look at a very simple program that just computes an inverse and the gradient of that operation to show how autograd works:\n\n```python\nt = torch.tensor(((1, 2), (3, 4)), dtype=torch.float32, requires_grad=True)\n\ninv = torch.linalg.inv(t)\ninv.backward(torch.ones_like(inv))", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "print(t.grad)\n: tensor([[-0.5000, 0.5000],\n [ 0.5000, -0.5000]])", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "We can mimic the same computation in NumPy by defining the autograd formula ourselves:\n\n```python\na = np.array(((1, 2), (3, 4)), dtype=np.float32)\n\ninv_np = np.linalg.inv(a)\n\ndef inv_backward(result, grad):\n return -(result.transpose(-2, -1) @ (grad @ result.transpose(-2, -1)))\ngrad_np = inv_backward(inv_np, np.ones_like(inv_np))\n\nprint(grad_np)\n: [[-0.5 0.5]\n [ 0.5 -0.5]]", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "Of course, as programs become more complicated it\u2019s convenient to have builtin autograd support, and PyTorch\u2019s linear algebra module supports both real and complex autograd.\n\n# CUDA Support", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "Support for autograd and accelerators, like CUDA devices, is a core part of PyTorch. The ```torch.linalg``` module was developed with NVIDIA\u2019s PyTorch and cuSOLVER teams, who helped optimize its performance on CUDA devices with the cuSOLVER, cuBLAS, and MAGMA libraries. These improvements make PyTorch\u2019s CUDA linear algebra operations faster than ever. For example, let\u2019s look at the performance of PyTorch 1.9\u2019s ```torch.linalg.cholesky``` vs. PyTorch 1.8\u2019s (now deprecated) ```torch.cholesky```:", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
\n\n(The above charts were created using an Ampere A100 GPU with CUDA 11.3, cuSOLVER 11.1.1.58, and MAGMA 2.5.2. Matrices are in double precision.)", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "These charts show that performance has increased significantly on larger matrices, and that batched performance is better across the board. Other linear algebra operations, including ```torch.linalg.qr``` and ```torch.linalg.lstsq```, have also had their CUDA performance improved.\n\n# Beyond NumPy", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "In addition to offering all the functions in NumPy\u2019s linear algebra module with support for autograd and accelerators, ```torch.linalg``` has a few new functions of its own. NumPy\u2019s ```linalg.norm``` does not allow users to compute vector norms over arbitrary subsets of dimensions, so to enable this functionality we added ```torch.linalg.vector_norm```. We\u2019ve also started modernizing other linear algebra functionality in PyTorch, so we created ```torch.linalg.householder_product``` to replace the older", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "```torch.orgqr```, and we plan to continue adding more linear algebra functionality in the future, too.", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "# The Future of Linear Algebra in PyTorch", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "The ```torch.linalg``` module is fast and familiar with great support for autograd and accelerators. It\u2019s already being used in libraries like [botorch](https://github.com/pytorch/botorch), too. But we\u2019re not stopping here. We plan to continue updating more of PyTorch\u2019s existing linear algebra functionality (like ```torch.lobpcg```) and offering more support for low rank and sparse linear algebra. We also want to hear your feedback on how we can improve, so start a conversation on the", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "[forum](https://discuss.pytorch.org/) or file an issue on our [Github](https://github.com/pytorch/pytorch) and share your thoughts.", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "We look forward to hearing from you and seeing what the community does with PyTorch\u2019s new linear algebra functionality!", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Democratizing AI with PyTorch Foundation and ROCm\u2122 support for PyTorch\"\nauthor: AMD\n---\n\n{:width=\"50%\" style=\"display:block; margin-left:auto; margin-right:auto\"}", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Last year, Meta announced that [PyTorch](https://pytorch.org/) joined the Linux Foundation as a neutral home for growing the machine learning project and community with AMD representation as a part of the founding membership and governing board.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[PyTorch Foundation\u2019s](https://pytorch.org/foundation) mission is to drive AI adoption by democratizing its software ecosystem through open source principles aligning with the AMD core principle of an Open software ecosystem. AMD strives to foster innovation through the support for latest generations of hardware, tools, libraries, and other components to simplify and accelerate adoption of AI across a broad range of scientific discoveries.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "AMD, along with key PyTorch codebase developers (including those at Meta AI), delivered a set of updates to the ROCm\u2122 open software ecosystem that brings stable support for AMD Instinct\u2122 accelerators as well as many Radeon\u2122 GPUs. This now gives PyTorch developers the ability to build their next great AI solutions leveraging AMD GPU", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "accelerators & ROCm. The support from PyTorch community in identifying gaps, prioritizing key updates, providing feedback for performance optimizing and supporting our journey from \u201cBeta\u201d to \u201cStable\u201d was immensely helpful and we deeply appreciate the strong collaboration between the two teams at AMD and PyTorch. The move for ROCm support from \u201cBeta\u201d to \u201cStable\u201d came in the PyTorch 1.12 release (June 2022) brings the added support to easily run PyTorch on native environment without having to configure custom", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "dockers. This is a sign of confidence about the quality of support and performance of PyTorch using AMD Instinct and ROCm. The results of these collaborative efforts are evident in the performance measured on key industry benchmarks like Microsoft\u2019s SuperBench shown below in Graph 1.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "
\n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\u201cWe are excited to see the significant impact of developers at AMD to contribute to and extend features within PyTorch to make AI models run in a more performant, efficient, and scalable way. A great example of this is the thought-leadership around unified memory approaches between the framework and future hardware systems, and we look forward to seeing that feature progress.\u201d
\n- Soumith Chintala, PyTorch lead-maintainer and Director of Engineering, Meta AI\n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The progressive improvements on both the AMD CDNA\u2122 architecture as well as ROCm and PyTorch shows single GPU model throughput increase from AMD Instinct MI100 to the latest generation AMD Instinct MI200 family GPUs going from ROCm 4.2 to ROCm 5.3 and from PyTorch 1.7 to PyTorch 1.12.\n\n{:width=\"100%\"}", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Graph 1: ML model performance over generation using Microsoft Superbench Suite 1, 2, 3\n\n\nBelow are a few of the key updates for ROCm support since the PyTorch 1.12 release", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Full Continuous Integration (CI) for ROCm on PyTorch\n\nWith the ROCm support for PyTorch move from \u201cBeta\u201d to \u201cStable,\u201d all the functions and features commits are now verified through a full Continuous Integration (CI) process. The CI process helps ensure the proper build and test process ahead of an expected Docker and PIP wheel release with stable commits forthcoming.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Support for [Kineto Profiler](https://github.com/pytorch/kineto)\n\nThe addition of Kineto profiler support to ROCm now helps developers and users understand performance bottlenecks through effective diagnosis and profiling tools. The tool also provides recommendations to improve known issues and visualization through TensorBoard UI.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Key PyTorch Libraries support added\n\nPyTorch ecosystem libraries like [TorchText](https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html) (Text classification), [TorchRec](https://pytorch.org/torchrec/) (libraries for recommender systems - RecSys), [TorchVision](https://pytorch.org/vision/stable/index.html) (Computer Vision), [TorchAudio](https://pytorch.org/audio/stable/index.html) (audio and signal processing) are fully supported since ROCm 5.1 and upstreamed with PyTorch 1.12.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Key libraries provided with the ROCm software stack including [MIOpen](https://github.com/ROCmSoftwarePlatform/MIOpen) (Convolution models), [RCCL](https://github.com/ROCmSoftwarePlatform/rccl) (ROCm Collective Communications) and [rocBLAS](https://github.com/ROCmSoftwarePlatform/rocBLAS) (BLAS for transformers) were further optimized to offer new potential efficiencies and higher performance.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'The torch.linalg module: Accelerated Linear Algebra with Autograd in PyTorch'\nauthor: Mike Ruberry, Ivan Yashchuk, Xiao Wang, Mario Lezcano and Natalia Gimelshein\nfeatured-img: 'assets/images/cholesky-decomposition.png'\n---\n\nLinear algebra is essential to deep learning and scientific computing, and it\u2019s always been a core part of PyTorch. PyTorch 1.9 extends PyTorch\u2019s support for linear algebra operations with the ```torch.linalg``` module. This module, documented [here](https://pytorch.org/docs/master/linalg.html?highlight=linalg#module-torch.linalg), has 26 operators, including faster and easier to use versions of older PyTorch operators, every function from [NumPy\u2019s linear algebra module](https://numpy.org/doc/stable/reference/routines.linalg.html) extended with accelerator and autograd support, and a few operators that are completely new. This makes the ```torch.linalg``` immediately familiar to NumPy users and an exciting update to PyTorch\u2019s linear algebra support.", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
+{"page_content": "# NumPy-like linear algebra in PyTorch\n\nIf you\u2019re familiar with NumPy\u2019s linear algebra module then it\u2019ll be easy to start using ```torch.linalg```. In most cases it\u2019s a drop-in replacement. Let\u2019s looking at drawing samples from a [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) using the [Cholesky decomposition](https://en.wikipedia.org/wiki/Cholesky_decomposition) as a motivating example to demonstrate this:\n\n```python\nimport numpy as np\n\n# Creates inputs\nnp.random.seed(0)\nmu_np = np.random.rand(4)\nL = np.random.rand(4, 4)\n# Covariance matrix sigma is positive-definite\nsigma_np = L @ L.T + np.eye(4)\nnormal_noise_np = np.random.standard_normal(mu_np.size)\n\ndef multivariate_normal_sample_np(mu, sigma, normal_noise):\n return mu + np.linalg.cholesky(sigma) @ normal_noise\n\nprint(\"Random sample: \", \n multivariate_normal_sample_np(mu_np, sigma_np, normal_noise_np))\n: Random sample: [2.9502426 1.78518077 1.83168697 0.90798228]\n```", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
+{"page_content": "Now let\u2019s see the same sampler implemented in PyTorch:\n\n```python\nimport torch\n\ndef multivariate_normal_sample_torch(mu, sigma, normal_noise):\n return mu + torch.linalg.cholesky(sigma) @ normal_noise\n```\n\nThe two functions are identical, and we can validate their behavior by calling the function with the same arguments wrapped as PyTorch tensors:\n\n```python\n# NumPy arrays are wrapped as tensors and share their memory\nmu_torch = torch.from_numpy(mu_np)\nsigma_torch = torch.from_numpy(sigma_np)\nnormal_noise_torch = torch.from_numpy(normal_noise_np)\n\nmultivariate_normal_sample_torch(mu_torch, sigma_torch, normal_noise_torch)\n: tensor([2.9502, 1.7852, 1.8317, 0.9080], dtype=torch.float64)\n```\n\nThe only difference is in how PyTorch prints tensors by default.", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
+{"page_content": "The Cholesky decomposition can also help us quickly compute the probability density function of the non-degenerate multivariate normal distribution. One of the expensive terms in that computation is the square root of the determinant of the covariance matrix. Using [properties of the determinant](https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant) and the Cholesky decomposition we can calculate the same result faster than the naive computation, however. Here\u2019s the NumPy program that demonstrates this:\n\n```python\nsqrt_sigma_det_np = np.sqrt(np.linalg.det(sigma_np))\nsqrt_L_det_np = np.prod(np.diag(np.linalg.cholesky(sigma_np)))\n\nprint(\"|sigma|^0.5 = \", sqrt_sigma_det_np)\n: |sigma|^0.5 = 4.237127491242027\n \nprint(\"|L| = \", sqrt_L_det_np)\n: |L| = 4.237127491242028\n```\n\nAnd here\u2019s the same validation in PyTorch:\n\n```python\nsqrt_sigma_det_torch = torch.sqrt(torch.linalg.det(sigma_torch))\nsqrt_L_det_torch = torch.prod(torch.diag(torch.linalg.cholesky(sigma_torch)))", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
+{"page_content": "print(\"|sigma|^0.5 = \", sqrt_sigma_det_torch)\n: |sigma|^0.5 = tensor(4.2371, dtype=torch.float64) \n\nprint(\"|L| = \", sqrt_L_det_torch)\n: |L| = tensor(4.2371, dtype=torch.float64)\n```\n\nWe can measure the difference in run time using PyTorch\u2019s built-in benchmark utility:\n\n```python\nimport torch.utils.benchmark as benchmark\n\nt0 = benchmark.Timer(\n stmt='torch.sqrt(torch.linalg.det(sigma))',\n globals={'sigma': sigma_torch})\n\nt1 = benchmark.Timer(\n stmt='torch.prod(torch.diag(torch.linalg.cholesky(sigma)))',\n globals={'sigma': sigma_torch})\n\nprint(t0.timeit(100))\n: torch.sqrt(torch.linalg.det(sigma))\n 80.80 us\n 1 measurement, 100 runs , 1 thread", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
+{"page_content": "print(t1.timeit(100))\n: torch.prod(torch.diag(torch.linalg.cholesky(sigma)))\n 11.56 us\n 1 measurement, 100 runs , 1 thread\n ```\n \nDemonstrating that the approach using the Cholesky decomposition can be significantly faster. Behind the scenes, PyTorch\u2019s linear algebra module uses OpenBLAS or MKL implementations of the LAPACK standard to maximize its CPU performance.\n\n# Autograd Support\n\nPyTorch\u2019s linear algebra module doesn\u2019t just implement the same functions as NumPy\u2019s linear algebra module (and a few more), it also extends them with autograd and CUDA support.\n\nLet\u2019s look at a very simple program that just computes an inverse and the gradient of that operation to show how autograd works:\n\n```python\nt = torch.tensor(((1, 2), (3, 4)), dtype=torch.float32, requires_grad=True)\n\ninv = torch.linalg.inv(t)\ninv.backward(torch.ones_like(inv))\n\nprint(t.grad)\n: tensor([[-0.5000, 0.5000],\n [ 0.5000, -0.5000]])\n```\n\nWe can mimic the same computation in NumPy by defining the autograd formula ourselves:", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
+{"page_content": "```python\na = np.array(((1, 2), (3, 4)), dtype=np.float32)\n\ninv_np = np.linalg.inv(a)\n\ndef inv_backward(result, grad):\n return -(result.transpose(-2, -1) @ (grad @ result.transpose(-2, -1)))\ngrad_np = inv_backward(inv_np, np.ones_like(inv_np))\n\nprint(grad_np)\n: [[-0.5 0.5]\n [ 0.5 -0.5]]\n```\n\nOf course, as programs become more complicated it\u2019s convenient to have builtin autograd support, and PyTorch\u2019s linear algebra module supports both real and complex autograd.\n\n# CUDA Support\n\nSupport for autograd and accelerators, like CUDA devices, is a core part of PyTorch. The ```torch.linalg``` module was developed with NVIDIA\u2019s PyTorch and cuSOLVER teams, who helped optimize its performance on CUDA devices with the cuSOLVER, cuBLAS, and MAGMA libraries. These improvements make PyTorch\u2019s CUDA linear algebra operations faster than ever. For example, let\u2019s look at the performance of PyTorch 1.9\u2019s ```torch.linalg.cholesky``` vs. PyTorch 1.8\u2019s (now deprecated) ```torch.cholesky```:", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
\n\n(The above charts were created using an Ampere A100 GPU with CUDA 11.3, cuSOLVER 11.1.1.58, and MAGMA 2.5.2. Matrices are in double precision.)\n\nThese charts show that performance has increased significantly on larger matrices, and that batched performance is better across the board. Other linear algebra operations, including ```torch.linalg.qr``` and ```torch.linalg.lstsq```, have also had their CUDA performance improved.\n\n# Beyond NumPy", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
+{"page_content": "# Beyond NumPy\n\nIn addition to offering all the functions in NumPy\u2019s linear algebra module with support for autograd and accelerators, ```torch.linalg``` has a few new functions of its own. NumPy\u2019s ```linalg.norm``` does not allow users to compute vector norms over arbitrary subsets of dimensions, so to enable this functionality we added ```torch.linalg.vector_norm```. We\u2019ve also started modernizing other linear algebra functionality in PyTorch, so we created ```torch.linalg.householder_product``` to replace the older ```torch.orgqr```, and we plan to continue adding more linear algebra functionality in the future, too.\n\n# The Future of Linear Algebra in PyTorch", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
+{"page_content": "The ```torch.linalg``` module is fast and familiar with great support for autograd and accelerators. It\u2019s already being used in libraries like [botorch](https://github.com/pytorch/botorch), too. But we\u2019re not stopping here. We plan to continue updating more of PyTorch\u2019s existing linear algebra functionality (like ```torch.lobpcg```) and offering more support for low rank and sparse linear algebra. We also want to hear your feedback on how we can improve, so start a conversation on the [forum](https://discuss.pytorch.org/) or file an issue on our [Github](https://github.com/pytorch/pytorch) and share your thoughts. \n\nWe look forward to hearing from you and seeing what the community does with PyTorch\u2019s new linear algebra functionality!", "metadata": {"source": "https://pytorch.org/blog/torch-linalg-autograd/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Democratizing AI with PyTorch Foundation and ROCm\u2122 support for PyTorch\"\nauthor: AMD\n---\n\n{:width=\"50%\" style=\"display:block; margin-left:auto; margin-right:auto\"}\n\nLast year, Meta announced that [PyTorch](https://pytorch.org/) joined the Linux Foundation as a neutral home for growing the machine learning project and community with AMD representation as a part of the founding membership and governing board.\n\n[PyTorch Foundation\u2019s](https://pytorch.org/foundation) mission is to drive AI adoption by democratizing its software ecosystem through open source principles aligning with the AMD core principle of an Open software ecosystem. AMD strives to foster innovation through the support for latest generations of hardware, tools, libraries, and other components to simplify and accelerate adoption of AI across a broad range of scientific discoveries.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\nAMD, along with key PyTorch codebase developers (including those at Meta AI), delivered a set of updates to the ROCm\u2122 open software ecosystem that brings stable support for AMD Instinct\u2122 accelerators as well as many Radeon\u2122 GPUs. This now gives PyTorch developers the ability to build their next great AI solutions leveraging AMD GPU accelerators & ROCm. The support from PyTorch community in identifying gaps, prioritizing key updates, providing feedback for performance optimizing and supporting our journey from \u201cBeta\u201d to \u201cStable\u201d was immensely helpful and we deeply appreciate the strong collaboration between the two teams at AMD and PyTorch. The move for ROCm support from \u201cBeta\u201d to \u201cStable\u201d came in the PyTorch 1.12 release (June 2022) brings the added support to easily run PyTorch on native environment without having to configure custom dockers. This is a sign of confidence about the quality of support and performance of PyTorch using AMD Instinct and ROCm. The results of these collaborative efforts are evident in the performance measured on key industry benchmarks like Microsoft\u2019s SuperBench shown below in Graph 1.\n
\n
\n
\n
\n\u201cWe are excited to see the significant impact of developers at AMD to contribute to and extend features within PyTorch to make AI models run in a more performant, efficient, and scalable way. A great example of this is the thought-leadership around unified memory approaches between the framework and future hardware systems, and we look forward to seeing that feature progress.\u201d
\n- Soumith Chintala, PyTorch lead-maintainer and Director of Engineering, Meta AI\n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "The progressive improvements on both the AMD CDNA\u2122 architecture as well as ROCm and PyTorch shows single GPU model throughput increase from AMD Instinct MI100 to the latest generation AMD Instinct MI200 family GPUs going from ROCm 4.2 to ROCm 5.3 and from PyTorch 1.7 to PyTorch 1.12.\n\n{:width=\"100%\"}\n\nGraph 1: ML model performance over generation using Microsoft Superbench Suite 1, 2, 3\n\n\nBelow are a few of the key updates for ROCm support since the PyTorch 1.12 release\n\n \n\n## Full Continuous Integration (CI) for ROCm on PyTorch", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "With the ROCm support for PyTorch move from \u201cBeta\u201d to \u201cStable,\u201d all the functions and features commits are now verified through a full Continuous Integration (CI) process. The CI process helps ensure the proper build and test process ahead of an expected Docker and PIP wheel release with stable commits forthcoming.\n\n\n## Support for [Kineto Profiler](https://github.com/pytorch/kineto)\n\nThe addition of Kineto profiler support to ROCm now helps developers and users understand performance bottlenecks through effective diagnosis and profiling tools. The tool also provides recommendations to improve known issues and visualization through TensorBoard UI.\n\n## Key PyTorch Libraries support added", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "PyTorch ecosystem libraries like [TorchText](https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html) (Text classification), [TorchRec](https://pytorch.org/torchrec/) (libraries for recommender systems - RecSys), [TorchVision](https://pytorch.org/vision/stable/index.html) (Computer Vision), [TorchAudio](https://pytorch.org/audio/stable/index.html) (audio and signal processing) are fully supported since ROCm 5.1 and upstreamed with PyTorch 1.12.\n\nKey libraries provided with the ROCm software stack including [MIOpen](https://github.com/ROCmSoftwarePlatform/MIOpen) (Convolution models), [RCCL](https://github.com/ROCmSoftwarePlatform/rccl) (ROCm Collective Communications) and [rocBLAS](https://github.com/ROCmSoftwarePlatform/rocBLAS) (BLAS for transformers) were further optimized to offer new potential efficiencies and higher performance.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
{"page_content": "MIOpen innovates on several fronts, such as implementing fusion to optimize for memory bandwidth and GPU launch overheads, providing an auto-tuning infrastructure to overcome the large design space of problem configurations, and implementing different algorithms to optimize convolutions for different filter and input sizes. MIOpen is one of the first libraries to publicly support the bfloat16 data-type for convolutions, allowing efficient training at lower precision maintaining expected accuracy.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "RCCL (pronounced \"Rickle\") is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, gather, scatter, and all-to-all. There is support for direct GPU-to-GPU send and receive operations. It has been optimized to achieve high bandwidth on platforms using PCIe\u00ae, Infinity Fabric\u2122 (GPU to GPU) as well as networking using InfiniBand Verbs or TCP/IP sockets. RCCL supports an arbitrary number of GPUs installed in single", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "or multiple nodes and can be used in either single- or multi-process (e.g., MPI) applications.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Along with the above key highlights, over 50 features and functionality improvements were completed jointly between AMD and PyTorch to add stable support for ROCm. These include improvements to tools, compilers, runtime, graph optimizations through TorchScript, INT8 quant path usage, and [ONNX runtime integration](https://onnxruntime.ai/) including support for Navi 21 based Radeon\u2122 PRO datacenter graphics card to name a few.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[AITemplate](https://github.com/facebookincubator/AITemplate) Inference Engine", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "MetaAI recently published a blog announcing the release of its open source AITemplate ([link](https://ai.facebook.com/blog/gpu-inference-engine-nvidia-amd-open-source/)) for a unified inference system supporting AMD Instinct GPU accelerators using the AMD ROCm stack. This Python based framework can help significantly improve performance through increased utilization of AMD matrix cores for transformer blocks. This is achieved through the AMD [Composable Kernel (CK)", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "library](https://github.com/ROCmSoftwarePlatform/composable_kernel) which provides performance critical Kernels for ML AI workloads across multiple architectures including GPUs and CPUs through HIP & C++.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Moreover, the AITemplate also provides out-of-the-box support for widely used AI models like BERT, ResNET, Vision Transformer, Stable Diffusion etc. simplifying deployment process through these pretrained models.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "What\u2019s coming with future ROCm releases?", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Unified memory models for CPU + GPU", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "As system architecture evolves to address the complexity of large problem sizes and data sets, memory management becomes a key performance bottle neck that needs a cohesive strategy to be addressed through innovations at both hardware and software levels. AMD is uniquely positioned to address this problem with its effective data center solutions integrating AMD EPYC\u2122 CPU cores with its AMD Instinct GPU compute units in a truly unified datacenter APU (Accelerated Processing Unit) form factor set to be", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "launched in 2H 2023.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The software work to leverage the unified CPU + GPU memory has already started in collaboration with the PyTorch team, to enable the usage of a fast, low latency, synchronized memory model that enables not only AMD but also other AI accelerators to address the complex memory management problem of today. We are looking forward to this joint effort and announcement soon.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Acknowledgement\n\nThe content in this blog highlights the joint work between AMD and key PyTorch contributors including Meta, working on many of the core features, as well as Microsoft enabling ONNX Runtime support. We are looking forward to working with the other founding members at the PyTorch Foundation on the next steps and improvements to democratize and grow adoption of PyTorch across the industry.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "CAUTIONARY STATEMENT", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "This blog contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the availability, timing and expected benefits of an AMD datacenter APU form factor, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as \"would,\" \"may,\" \"expects,\" \"believes,\" \"plans,\" \"intends,\" \"projects\" and other terms with similar meaning. Investors are cautioned that the", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "forward-looking statements in this blog are based on current beliefs, assumptions and expectations, speak only as of the date of this blog and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "in, or implied or projected by, the forward-looking information and statements. Investors are urged to review in detail the risks and uncertainties in AMD\u2019s Securities and Exchange Commission filings, including but not limited to AMD\u2019s most recent reports on Forms 10-K and 10-Q. AMD does not assume, and hereby disclaims, any obligation to update forward-looking statements made in this blog, except as may be required by law.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Endnotes", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "1. MI100D-01 SuperBench v0.5 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC\u2122 7763 CPU server tested with 1x AMD Instinct\u2122 MI100 (32GB HBM2e) 300W GPU, SBIOS 2.2, Ubuntu\u00ae 20.04.5 LTS, host ROCm\u2122 5.2.0, guest ROCm 4.2, PyTorch 1.7.0. Server manufacturers may vary configurations, yielding different results. Performance may vary based factors including use of latest drivers and optimizations.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "2. MI200D-01 SuperBench v0.6 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC\u2122 7763 CPU server tested with 1x AMD Instinct\u2122 MI210 (64GB HBM2e) 300W GPU, SBIOS 2.2, Ubuntu 20.04.5 LTS, host ROCm 5.3.0, guest ROCm 5.3, PyTorch 1.12. Server manufacturers may vary configurations, yielding different results. Performance may vary based factors including use of latest drivers and optimizations.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3. MI200D-02: SuperBench v0.6 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC\u2122\ufe0f 7763 CPU server tested with 1x AMD Instinct\u2122\ufe0f MI250 (128GB HBM2e) 560W GPU, SBIOS M12, Ubuntu 20.04 LTS, host ROCm 5.3.0, guest ROCm 5.3, PyTorch 1.12. Server manufacturers may vary configurations, yielding different results. Performance may vary based factors including use of latest drivers and optimizations.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing PyTorch Fully Sharded Data Parallel (FSDP) API\"\nauthor: Yanli Zhao, Rohan Varma, Chien-Chin Huang, Shen Li, Min Xu, Alban Desmaison\nfeatured-img: \"assets/images/pytorch-logo.jpg\"\n---", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "Recent studies have shown that large model training will be beneficial for improving model quality. During the last 3 years, model size grew 10,000 times from [BERT](https://arxiv.org/abs/1810.04805) with 110M parameters to [Megatron-2](https://arxiv.org/abs/2104.04473) with one trillion. However, training large AI models is not easy\u2014aside from the need for large amounts of computing resources, software engineering complexity is also challenging. PyTorch has been working on building tools and infrastructure", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "to make it easier.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Distributed data parallelism is a staple of scalable deep learning because of its robustness and simplicity. It however requires the model to fit on one GPU. Recent approaches like DeepSpeed ZeRO and FairScale\u2019s Fully Sharded Data Parallel allow us to break this barrier by sharding a model\u2019s parameters, gradients and optimizer states across data parallel workers while still maintaining the simplicity of data parallelism.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "With PyTorch 1.11 we\u2019re adding native support for Fully Sharded Data Parallel (FSDP), currently available as a prototype feature. Its implementation heavily borrows from FairScale\u2019s version while bringing more streamlined APIs and additional performance improvements.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "Scaling tests of PyTorch FSDP on AWS show it can scale up to train dense models with 1T parameters. Realized performance in our experiments reached 84 TFLOPS per A100 GPU for GPT 1T model and 159 TFLOPS per A100 GPU for GPT 175B model on AWS cluster. Native FSDP implementation also dramatically improved model initialization time compared to FairScale\u2019s original when CPU offloading was enabled.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "In future PyTorch versions, we\u2019re going to enable users to seamlessly switch between DDP, ZeRO-1, ZeRO-2 and FSDP flavors of data parallelism, so that users can train different scales of models with simple configurations in the unified API.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "How FSDP Works\n\nFSDP is a type of data-parallel training, but unlike traditional data-parallel, which maintains a per-GPU copy of a model\u2019s parameters, gradients and optimizer states, it shards all of these states across data-parallel workers and can optionally offload the sharded model parameters to CPUs. \n\nThe figure below shows how FSDP works for 2 data-parallel processes:\n\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "\nFigure 1. FSDP workflow\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "Usually, model layers are wrapped with FSDP in a nested way, so that only layers in a single FSDP instance need to gather the full parameters to a single device during forward or backward computations. The gathered full parameters will be freed immediately after computation, and the freed memory can be used for the next layer\u2019s computation. In this way, peak GPU memory could be saved and thus training can be scaled to use a larger model size or larger batch size. To further maximize memory efficiency, FSDP", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "can offload the parameters, gradients and optimizer states to CPUs when the instance is not active in the computation.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "Using FSDP in PyTorch\n\nThere are two ways to wrap a model with PyTorch FSDP. Auto wrapping is a drop-in replacement for DDP; manual wrapping needs minimal changes of model definition code with the ability to explore complex sharding strategies.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "Auto Wrapping\n\nModel layers should be wrapped in FSDP in a nested way to save peak memory and enable communication and computation overlapping. The simplest way to do it is auto wrapping, which can serve as a drop-in replacement for DDP without changing the rest of the code.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "fsdp_auto_wrap_policy argument allows specifying a callable function to recursively wrap layers with FSDP. default_auto_wrap_policy function provided by the PyTorch FSDP recursively wraps layers with the number of parameters larger than 100M. You can supply your own wrapping policy as needed. The example of writing a customized wrapping policy is shown in the [FSDP API doc](https://pytorch.org/docs/stable/fsdp.html).", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "In addition, cpu_offload could be configured optionally to offload wrapped parameters to CPUs when these parameters are not used in computation. This can further improve memory efficiency at the cost of data transfer overhead between host and device.\n\nThe example below shows how FSDP is wrapped using auto wrapping.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfrom torch.distributed.fsdp import (\n FullyShardedDataParallel,\n CPUOffload,\n)\nfrom torch.distributed.fsdp.wrap import (\n default_auto_wrap_policy,\n)\nimport torch.nn as nn\n \nclass model(nn.Module):\n def __init__(self):\n super().__init__()\n self.layer1 = nn.Linear(8, 4)\n self.layer2 = nn.Linear(4, 16)\n self.layer3 = nn.Linear(16, 4)\n \nmodel = DistributedDataParallel(model())\nfsdp_model = FullyShardedDataParallel(\n model(),", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "fsdp_auto_wrap_policy=default_auto_wrap_policy,\n cpu_offload=CPUOffload(offload_params=True),\n)\n```", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "Manual Wrapping\n\nManual wrapping can be useful to explore complex sharding strategies by applying `wrap` selectively to some parts of the model. Overall settings can be passed to the enable_wrap() context manager.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfrom torch.distributed.fsdp import (\n FullyShardedDataParallel,\n CPUOffload,\n)\nfrom torch.distributed.fsdp.wrap import (\n enable_wrap,\n wrap,\n)\nimport torch.nn as nn\nfrom typing import Dict\n \n \nclass model(nn.Module):\n def __init__(self):\n super().__init__()\n self.layer1 = wrap(nn.Linear(8, 4))\n self.layer2 = nn.Linear(4, 16)\n self.layer3 = wrap(nn.Linear(16, 4))\n \nwrapper_kwargs = Dict(cpu_offload=CPUOffload(offload_params=True))", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "with enable_wrap(wrapper_cls=FullyShardedDataParallel, **wrapper_kwargs):\n fsdp_model = wrap(model())", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "After wrapping the model with FSDP using one of the two above approaches, the model can be trained in a similar way as local training, like this:\n\n```python\noptim = torch.optim.Adam(fsdp_model.parameters(), lr=0.0001)\nfor sample, label in next_batch():\n out = fsdp_model(input)\n loss = criterion(out, label)\n loss.backward()\n optim.step()\n```", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "Benchmark Results\n\nWe ran extensive scaling tests for 175B and 1T GPT models on AWS clusters using PyTorch FSDP. Each cluster node is an instance with 8 [NVIDIA A100-SXM4-40GB](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-us-nvidia-1758950-r4-web.pdf) GPUs, and inter-nodes are connected via AWS Elastic Fabric Adapter (EFA) with 400 Gbps network bandwidth.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "GPT models are implemented using [minGPT](https://github.com/karpathy/minGPT). A randomly generated input dataset is used for benchmarking purposes. All experiments ran with 50K vocabulary size, fp16 precision and [SGD](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html) optimizer.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "| Model | Number of layers | Hidden size | Attention heads | Model size, billions of parameters |\n|----------|------------------|-------------|-----------------|------------------------------------|\n| GPT 175B | 96 | 12288 | 96 | 175 |\n| GPT 1T | 128 | 25600 | 160 | 1008 |", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "In addition to using FSDP with parameters CPU offloading in the experiments, the [activation checkpointing feature](https://pytorch.org/docs/stable/checkpoint.html) in PyTorch is also applied in the tests.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "The maximum per-GPU throughput of 159 teraFLOP/s (51% of NVIDIA A100 peak theoretical performance 312 teraFLOP/s/GPU) is achieved with batch size 20 and sequence length 512 on 128 GPUs for the GPT 175B model; further increase of the number of GPUs leads to per-GPU throughput degradation because of growing communication between the nodes.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "For the GPT 1T model, the maximum per-GPU throughput of 84 teraFLOP/s (27% of the peak teraFLOP/s) is achieved with batch size 4 and sequence length 2048 on 128 GPUs. However, further increase of the number of GPUs doesn\u2019t affect the per-GPU throughput too much because we observed that the largest bottleneck in the 1T model training is not from communication but from the slow CUDA cache allocator when peak GPU memory is reaching the limit. The use of A100 80G GPUs with larger memory capacity will mostly", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "resolve this issue and also help scale the batch size to achieve much larger throughput.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "Future Work", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "In the next beta release, we are planning to add efficient distributed model/states checkpointing APIs, meta device support for large model materialization, and mixed-precision support inside FSDP computation and communication. We\u2019re also going to make it easier to switch between [DDP](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html), [ZeRO1, ZeRO2](https://arxiv.org/abs/1910.02054) and FSDP flavors of data parallelism in the new API. To further improve FSDP performance, memory fragmentation", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "reduction and communication efficiency improvements are also planned.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "A Bit of History of 2 Versions of FSDP\n\n[FairScale FSDP](https://engineering.fb.com/2021/07/15/open-source/fsdp/) was released in early 2021 as part of the FairScale library. And then we started the effort to upstream FairScale FSDP to PyTorch in PT 1.11, making it production-ready. We have selectively upstreamed and refactored key features from FairScale FSDP, redesigned user interfaces and made performance improvements.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "In the near future, FairScale FSDP will stay in the FairScale repository for research projects, while generic and widely adopted features will be upstreamed to PyTorch incrementally and hardened accordingly.\n\nMeanwhile, PyTorch FSDP will focus more on production readiness and long-term support. This includes better integration with ecosystems and improvements on performance, usability, reliability, debuggability and composability.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "Acknowledgments", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "We would like to thank the authors of FairScale FSDP: Myle Ott, Sam Shleifer, Min Xu, Priya Goyal, Quentin Duval, Vittorio Caggiano, Tingting Markstrum, Anjali Sridhar. Thanks to the Microsoft DeepSpeed ZeRO team for developing and popularizing sharded data parallel techniques. Thanks to Pavel Belevich, Jessica Choi, Sisil Mehta for running experiments using PyTorch FSDP on different clusters. Thanks to Geeta Chauhan, Mahesh Yadav, Pritam Damania, Dmytro Dzhulgakov for supporting this effort and insightful", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "discussions.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch library updates including new model serving library '\nauthor: Team PyTorch\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "Along with the PyTorch 1.5 release, we are announcing new libraries for high-performance PyTorch model serving and tight integration with TorchElastic and Kubernetes. Additionally, we are releasing updated packages for torch_xla (Google Cloud TPUs), torchaudio, torchvision, and torchtext. All of these new libraries and enhanced capabilities are available today and accompany all of the core features [released in PyTorch 1.5](https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis).", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "TorchServe (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
+{"page_content": "RCCL (pronounced \"Rickle\") is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, gather, scatter, and all-to-all. There is support for direct GPU-to-GPU send and receive operations. It has been optimized to achieve high bandwidth on platforms using PCIe\u00ae, Infinity Fabric\u2122 (GPU to GPU) as well as networking using InfiniBand Verbs or TCP/IP sockets. RCCL supports an arbitrary number of GPUs installed in single or multiple nodes and can be used in either single- or multi-process (e.g., MPI) applications.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Along with the above key highlights, over 50 features and functionality improvements were completed jointly between AMD and PyTorch to add stable support for ROCm. These include improvements to tools, compilers, runtime, graph optimizations through TorchScript, INT8 quant path usage, and [ONNX runtime integration](https://onnxruntime.ai/) including support for Navi 21 based Radeon\u2122 PRO datacenter graphics card to name a few.\n\n## [AITemplate](https://github.com/facebookincubator/AITemplate) Inference Engine", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "MetaAI recently published a blog announcing the release of its open source AITemplate ([link](https://ai.facebook.com/blog/gpu-inference-engine-nvidia-amd-open-source/)) for a unified inference system supporting AMD Instinct GPU accelerators using the AMD ROCm stack. This Python based framework can help significantly improve performance through increased utilization of AMD matrix cores for transformer blocks. This is achieved through the AMD [Composable Kernel (CK) library](https://github.com/ROCmSoftwarePlatform/composable_kernel) which provides performance critical Kernels for ML AI workloads across multiple architectures including GPUs and CPUs through HIP & C++.\n\nMoreover, the AITemplate also provides out-of-the-box support for widely used AI models like BERT, ResNET, Vision Transformer, Stable Diffusion etc. simplifying deployment process through these pretrained models.\n\n \n## What\u2019s coming with future ROCm releases?\n\n### Unified memory models for CPU + GPU", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "As system architecture evolves to address the complexity of large problem sizes and data sets, memory management becomes a key performance bottle neck that needs a cohesive strategy to be addressed through innovations at both hardware and software levels. AMD is uniquely positioned to address this problem with its effective data center solutions integrating AMD EPYC\u2122 CPU cores with its AMD Instinct GPU compute units in a truly unified datacenter APU (Accelerated Processing Unit) form factor set to be launched in 2H 2023.\n\nThe software work to leverage the unified CPU + GPU memory has already started in collaboration with the PyTorch team, to enable the usage of a fast, low latency, synchronized memory model that enables not only AMD but also other AI accelerators to address the complex memory management problem of today. We are looking forward to this joint effort and announcement soon.\n\n## Acknowledgement", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "## Acknowledgement\n\nThe content in this blog highlights the joint work between AMD and key PyTorch contributors including Meta, working on many of the core features, as well as Microsoft enabling ONNX Runtime support. We are looking forward to working with the other founding members at the PyTorch Foundation on the next steps and improvements to democratize and grow adoption of PyTorch across the industry.\n\n## CAUTIONARY STATEMENT", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "\nThis blog contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the availability, timing and expected benefits of an AMD datacenter APU form factor, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as \"would,\" \"may,\" \"expects,\" \"believes,\" \"plans,\" \"intends,\" \"projects\" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this blog are based on current beliefs, assumptions and expectations, speak only as of the date of this blog and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Investors are urged to review in detail the risks and uncertainties in AMD\u2019s Securities and Exchange Commission filings, including but not limited to AMD\u2019s most recent reports on Forms 10-K and 10-Q. AMD does not assume, and hereby disclaims, any obligation to update forward-looking statements made in this blog, except as may be required by law. \n", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "## Endnotes", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "1. MI100D-01 SuperBench v0.5 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC\u2122 7763 CPU server tested with 1x AMD Instinct\u2122 MI100 (32GB HBM2e) 300W GPU, SBIOS 2.2, Ubuntu\u00ae 20.04.5 LTS, host ROCm\u2122 5.2.0, guest ROCm 4.2, PyTorch 1.7.0. Server manufacturers may vary configurations, yielding different results. Performance may vary based factors including use of latest drivers and optimizations.\n2. MI200D-01 SuperBench v0.6 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC\u2122 7763 CPU server tested with 1x AMD Instinct\u2122 MI210 (64GB HBM2e) 300W GPU, SBIOS 2.2, Ubuntu 20.04.5 LTS, host ROCm 5.3.0, guest ROCm 5.3, PyTorch 1.12. Server manufacturers may vary configurations, yielding different results. Performance may vary based factors including use of latest drivers and optimizations.\n3. MI200D-02: SuperBench v0.6 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC\u2122\ufe0f 7763 CPU server tested with 1x AMD Instinct\u2122\ufe0f MI250 (128GB HBM2e) 560W GPU, SBIOS M12, Ubuntu 20.04 LTS, host ROCm 5.3.0, guest ROCm 5.3, PyTorch 1.12. Server manufacturers may vary configurations, yielding different results. Performance may vary based factors including use of latest drivers and optimizations.", "metadata": {"source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Introducing PyTorch Fully Sharded Data Parallel (FSDP) API\"\nauthor: Yanli Zhao, Rohan Varma, Chien-Chin Huang, Shen Li, Min Xu, Alban Desmaison\nfeatured-img: \"assets/images/pytorch-logo.jpg\"\n---\n\nRecent studies have shown that large model training will be beneficial for improving model quality. During the last 3 years, model size grew 10,000 times from [BERT](https://arxiv.org/abs/1810.04805) with 110M parameters to [Megatron-2](https://arxiv.org/abs/2104.04473) with one trillion. However, training large AI models is not easy\u2014aside from the need for large amounts of computing resources, software engineering complexity is also challenging. PyTorch has been working on building tools and infrastructure to make it easier.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "PyTorch Distributed data parallelism is a staple of scalable deep learning because of its robustness and simplicity. It however requires the model to fit on one GPU. Recent approaches like DeepSpeed ZeRO and FairScale\u2019s Fully Sharded Data Parallel allow us to break this barrier by sharding a model\u2019s parameters, gradients and optimizer states across data parallel workers while still maintaining the simplicity of data parallelism.\n\nWith PyTorch 1.11 we\u2019re adding native support for Fully Sharded Data Parallel (FSDP), currently available as a prototype feature. Its implementation heavily borrows from FairScale\u2019s version while bringing more streamlined APIs and additional performance improvements.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "Scaling tests of PyTorch FSDP on AWS show it can scale up to train dense models with 1T parameters. Realized performance in our experiments reached 84 TFLOPS per A100 GPU for GPT 1T model and 159 TFLOPS per A100 GPU for GPT 175B model on AWS cluster. Native FSDP implementation also dramatically improved model initialization time compared to FairScale\u2019s original when CPU offloading was enabled.\n\nIn future PyTorch versions, we\u2019re going to enable users to seamlessly switch between DDP, ZeRO-1, ZeRO-2 and FSDP flavors of data parallelism, so that users can train different scales of models with simple configurations in the unified API.\n\n### How FSDP Works\n\nFSDP is a type of data-parallel training, but unlike traditional data-parallel, which maintains a per-GPU copy of a model\u2019s parameters, gradients and optimizer states, it shards all of these states across data-parallel workers and can optionally offload the sharded model parameters to CPUs. \n\nThe figure below shows how FSDP works for 2 data-parallel processes:", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\n\nFigure 1. FSDP workflow\n
\n\nUsually, model layers are wrapped with FSDP in a nested way, so that only layers in a single FSDP instance need to gather the full parameters to a single device during forward or backward computations. The gathered full parameters will be freed immediately after computation, and the freed memory can be used for the next layer\u2019s computation. In this way, peak GPU memory could be saved and thus training can be scaled to use a larger model size or larger batch size. To further maximize memory efficiency, FSDP can offload the parameters, gradients and optimizer states to CPUs when the instance is not active in the computation.\n\n### Using FSDP in PyTorch\n\nThere are two ways to wrap a model with PyTorch FSDP. Auto wrapping is a drop-in replacement for DDP; manual wrapping needs minimal changes of model definition code with the ability to explore complex sharding strategies.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "#### Auto Wrapping\n\nModel layers should be wrapped in FSDP in a nested way to save peak memory and enable communication and computation overlapping. The simplest way to do it is auto wrapping, which can serve as a drop-in replacement for DDP without changing the rest of the code.\n\nfsdp_auto_wrap_policy argument allows specifying a callable function to recursively wrap layers with FSDP. default_auto_wrap_policy function provided by the PyTorch FSDP recursively wraps layers with the number of parameters larger than 100M. You can supply your own wrapping policy as needed. The example of writing a customized wrapping policy is shown in the [FSDP API doc](https://pytorch.org/docs/stable/fsdp.html).\n\nIn addition, cpu_offload could be configured optionally to offload wrapped parameters to CPUs when these parameters are not used in computation. This can further improve memory efficiency at the cost of data transfer overhead between host and device.\n\nThe example below shows how FSDP is wrapped using auto wrapping.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "```python\nfrom torch.distributed.fsdp import (\n FullyShardedDataParallel,\n CPUOffload,\n)\nfrom torch.distributed.fsdp.wrap import (\n default_auto_wrap_policy,\n)\nimport torch.nn as nn\n \nclass model(nn.Module):\n def __init__(self):\n super().__init__()\n self.layer1 = nn.Linear(8, 4)\n self.layer2 = nn.Linear(4, 16)\n self.layer3 = nn.Linear(16, 4)\n \nmodel = DistributedDataParallel(model())\nfsdp_model = FullyShardedDataParallel(\n model(),\n fsdp_auto_wrap_policy=default_auto_wrap_policy,\n cpu_offload=CPUOffload(offload_params=True),\n)\n```\n\n#### Manual Wrapping\n\nManual wrapping can be useful to explore complex sharding strategies by applying `wrap` selectively to some parts of the model. Overall settings can be passed to the enable_wrap() context manager.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "```python\nfrom torch.distributed.fsdp import (\n FullyShardedDataParallel,\n CPUOffload,\n)\nfrom torch.distributed.fsdp.wrap import (\n enable_wrap,\n wrap,\n)\nimport torch.nn as nn\nfrom typing import Dict\n \n \nclass model(nn.Module):\n def __init__(self):\n super().__init__()\n self.layer1 = wrap(nn.Linear(8, 4))\n self.layer2 = nn.Linear(4, 16)\n self.layer3 = wrap(nn.Linear(16, 4))\n \nwrapper_kwargs = Dict(cpu_offload=CPUOffload(offload_params=True))\nwith enable_wrap(wrapper_cls=FullyShardedDataParallel, **wrapper_kwargs):\n fsdp_model = wrap(model())\n```\n\nAfter wrapping the model with FSDP using one of the two above approaches, the model can be trained in a similar way as local training, like this:\n\n```python\noptim = torch.optim.Adam(fsdp_model.parameters(), lr=0.0001)\nfor sample, label in next_batch():\n out = fsdp_model(input)\n loss = criterion(out, label)\n loss.backward()\n optim.step()\n```\n\n### Benchmark Results", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "We ran extensive scaling tests for 175B and 1T GPT models on AWS clusters using PyTorch FSDP. Each cluster node is an instance with 8 [NVIDIA A100-SXM4-40GB](https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-us-nvidia-1758950-r4-web.pdf) GPUs, and inter-nodes are connected via AWS Elastic Fabric Adapter (EFA) with 400 Gbps network bandwidth.\n\nGPT models are implemented using [minGPT](https://github.com/karpathy/minGPT). A randomly generated input dataset is used for benchmarking purposes. All experiments ran with 50K vocabulary size, fp16 precision and [SGD](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html) optimizer.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "| Model | Number of layers | Hidden size | Attention heads | Model size, billions of parameters |\n|----------|------------------|-------------|-----------------|------------------------------------|\n| GPT 175B | 96 | 12288 | 96 | 175 |\n| GPT 1T | 128 | 25600 | 160 | 1008 |\n\nIn addition to using FSDP with parameters CPU offloading in the experiments, the [activation checkpointing feature](https://pytorch.org/docs/stable/checkpoint.html) in PyTorch is also applied in the tests.\n\nThe maximum per-GPU throughput of 159 teraFLOP/s (51% of NVIDIA A100 peak theoretical performance 312 teraFLOP/s/GPU) is achieved with batch size 20 and sequence length 512 on 128 GPUs for the GPT 175B model; further increase of the number of GPUs leads to per-GPU throughput degradation because of growing communication between the nodes.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "For the GPT 1T model, the maximum per-GPU throughput of 84 teraFLOP/s (27% of the peak teraFLOP/s) is achieved with batch size 4 and sequence length 2048 on 128 GPUs. However, further increase of the number of GPUs doesn\u2019t affect the per-GPU throughput too much because we observed that the largest bottleneck in the 1T model training is not from communication but from the slow CUDA cache allocator when peak GPU memory is reaching the limit. The use of A100 80G GPUs with larger memory capacity will mostly resolve this issue and also help scale the batch size to achieve much larger throughput.\n\n\n
\n
\n\n\n
\n
\n\n### Future Work", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "### Future Work\n\nIn the next beta release, we are planning to add efficient distributed model/states checkpointing APIs, meta device support for large model materialization, and mixed-precision support inside FSDP computation and communication. We\u2019re also going to make it easier to switch between [DDP](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html), [ZeRO1, ZeRO2](https://arxiv.org/abs/1910.02054) and FSDP flavors of data parallelism in the new API. To further improve FSDP performance, memory fragmentation reduction and communication efficiency improvements are also planned.\n\n### A Bit of History of 2 Versions of FSDP", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "[FairScale FSDP](https://engineering.fb.com/2021/07/15/open-source/fsdp/) was released in early 2021 as part of the FairScale library. And then we started the effort to upstream FairScale FSDP to PyTorch in PT 1.11, making it production-ready. We have selectively upstreamed and refactored key features from FairScale FSDP, redesigned user interfaces and made performance improvements.\n\nIn the near future, FairScale FSDP will stay in the FairScale repository for research projects, while generic and widely adopted features will be upstreamed to PyTorch incrementally and hardened accordingly.\n\nMeanwhile, PyTorch FSDP will focus more on production readiness and long-term support. This includes better integration with ecosystems and improvements on performance, usability, reliability, debuggability and composability.\n\n### Acknowledgments", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "### Acknowledgments\n\nWe would like to thank the authors of FairScale FSDP: Myle Ott, Sam Shleifer, Min Xu, Priya Goyal, Quentin Duval, Vittorio Caggiano, Tingting Markstrum, Anjali Sridhar. Thanks to the Microsoft DeepSpeed ZeRO team for developing and popularizing sharded data parallel techniques. Thanks to Pavel Belevich, Jessica Choi, Sisil Mehta for running experiments using PyTorch FSDP on different clusters. Thanks to Geeta Chauhan, Mahesh Yadav, Pritam Damania, Dmytro Dzhulgakov for supporting this effort and insightful discussions.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch library updates including new model serving library '\nauthor: Team PyTorch\n---\n\n\nAlong with the PyTorch 1.5 release, we are announcing new libraries for high-performance PyTorch model serving and tight integration with TorchElastic and Kubernetes. Additionally, we are releasing updated packages for torch_xla (Google Cloud TPUs), torchaudio, torchvision, and torchtext. All of these new libraries and enhanced capabilities are available today and accompany all of the core features [released in PyTorch 1.5](https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis). \n\n## TorchServe (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
{"page_content": "TorchServe is a flexible and easy to use library for serving PyTorch models in production performantly at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration. TorchServe was jointly developed by engineers from Facebook and AWS with feedback and engagement from the broader PyTorch community. The experimental release of TorchServe is available today. Some of the highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "* Support for both Python-based and TorchScript-based models\n* Default handlers for common use cases (e.g., image segmentation, text classification) as well as the ability to write custom handlers for other use cases\n* Model versioning, the ability to run multiple versions of a model at the same time, and the ability to roll back to an earlier version", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "* The ability to package a model, learning weights, and supporting files (e.g., class mappings, vocabularies) into a single, persistent artifact (a.k.a. the \u201cmodel archive\u201d)\n* Robust management capability, allowing full configuration of models, versions, and individual worker threads via command line, config file, or run-time API\n* Automatic batching of individual inferences across HTTP requests\n* Logging including common metrics, and the ability to incorporate custom metrics", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "* Ready-made Dockerfile for easy deployment\n* HTTPS support for secure deployment", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "To learn more about the APIs and the design of this feature, see the links below:\n* See for a full multi-node deployment reference architecture.\n* The full documentation can be found [here](https://pytorch.org/serve).", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "TorchElastic integration with Kubernetes (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "[TorchElastic](https://github.com/pytorch/elastic) is a proven library for training large scale deep neural networks at scale within companies like Facebook, where having the ability to dynamically adapt to server availability and scale as new compute resources come online is critical. Kubernetes enables customers using machine learning frameworks like PyTorch to run training jobs distributed across fleets of powerful GPU instances like the Amazon EC2 P3. Distributed training jobs, however, are not", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "fault-tolerant, and a job cannot continue if a node failure or reclamation interrupts training. Further, jobs cannot start without acquiring all required resources, or scale up and down without being restarted. This lack of resiliency and flexibility results in increased training time and costs from idle resources. TorchElastic addresses these limitations by enabling distributed training jobs to be executed in a fault-tolerant and elastic manner. Until today, Kubernetes users needed to manage Pods and", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "Services required for TorchElastic training jobs manually.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "Through the joint collaboration of engineers at Facebook and AWS, TorchElastic, adding elasticity and fault tolerance, is now supported using vanilla Kubernetes and through the managed EKS service from AWS.\n\nTo learn more see the [TorchElastic repo](http://pytorch.org/elastic/0.2.0rc0/kubernetes.html) for the controller implementation and docs on how to use it.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "torch_xla 1.5 now available", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "[torch_xla](http://pytorch.org/xla/) is a Python package that uses the [XLA linear algebra compiler](https://www.tensorflow.org/xla) to accelerate the [PyTorch deep learning framework](https://pytorch.org/) on [Cloud TPUs](https://cloud.google.com/tpu/) and [Cloud TPU Pods](https://cloud.google.com/tpu/docs/tutorials/pytorch-pod). torch_xla aims to give PyTorch users the ability to do everything they can do on GPUs on Cloud TPUs as well while minimizing changes to the user experience. The project began with", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "a conversation at NeurIPS 2017 and gathered momentum in 2018 when teams from Facebook and Google came together to create a proof of concept. We announced this collaboration at PTDC 2018 and made the PyTorch/XLA integration broadly available at PTDC 2019. The project already has 28 contributors, nearly 2k commits, and a repo that has been forked more than 100 times.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "This release of [torch_xla](http://pytorch.org/xla/) is aligned and tested with PyTorch 1.5 to reduce friction for developers and to provide a stable and mature PyTorch/XLA stack for training models using Cloud TPU hardware. You can [try it for free](https://medium.com/pytorch/get-started-with-pytorch-cloud-tpus-and-colab-a24757b8f7fc) in your browser on an 8-core Cloud TPU device with [Google Colab](https://colab.research.google.com/), and you can use it at a much larger scaleon [Google", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "Cloud](https://cloud.google.com/gcp).", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "See the full torch_xla release notes [here](https://github.com/pytorch/xla/releases). Full docs and tutorials can be found [here](https://pytorch.org/xla/) and [here](https://cloud.google.com/tpu/docs/tutorials).", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Domain Libraries\n\ntorchaudio, torchvision, and torchtext complement PyTorch with common datasets, models, and transforms in each domain area. We\u2019re excited to share new releases for all three domain libraries alongside PyTorch 1.5 and the rest of the library updates. For this release, all three domain libraries are removing support for Python2 and will support Python3 only.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "torchaudio 0.5\nThe torchaudio 0.5 release includes new transforms, functionals, and datasets. Highlights for the release include:\n\n* Added the Griffin-Lim functional and transform, `InverseMelScale` and `Vol` transforms, and `DB_to_amplitude`. \n* Added support for `allpass`, `fade`, `bandpass`, `bandreject`, `band`, `treble`, `deemph`, and `riaa` filters and transformations.\n* New datasets added including `LJSpeech` and `SpeechCommands` datasets.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "See the release full notes [here](https://github.com/pytorch/audio/releases) and full docs can be found [here](https://pytorch.org/audio/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "torchvision 0.6\nThe torchvision 0.6 release includes updates to datasets, models and a significant number of bug fixes. Highlights include:\n\n* Faster R-CNN now supports negative samples which allows the feeding of images without annotations at training time.\n* Added `aligned` flag to `RoIAlign` to match Detectron2. \n* Refactored abstractions for C++ video decoder", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "See the release full notes [here](https://github.com/pytorch/vision/releases) and full docs can be found [here](https://pytorch.org/docs/stable/torchvision/index.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "torchtext 0.6\nThe torchtext 0.6 release includes a number of bug fixes and improvements to documentation. Based on user's feedback, dataset abstractions are currently being redesigned also. Highlights for the release include:\n\n* Fixed an issue related to the SentencePiece dependency in conda package.\n* Added support for the experimental IMDB dataset to allow a custom vocab.\n* A number of documentation updates including adding a code of conduct and a deduplication of the docs on the torchtext site.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "Your feedback and discussions on the experimental datasets API are welcomed. You can send them to [issue #664](https://github.com/pytorch/text/issues/664). We would also like to highlight the pull request [here](https://github.com/pytorch/text/pull/701) where the latest dataset abstraction is applied to the text classification datasets. The feedback can be beneficial to finalizing this abstraction.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "See the release full notes [here](https://github.com/pytorch/text/releases) and full docs can be found [here](https://pytorch.org/text/).\n\n\n*We\u2019d like to thank the entire PyTorch team, the Amazon team and the community for all their contributions to this work.*\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Understanding LazyTensor System Performance with PyTorch/XLA on Cloud TPU\"\nauthor: Vaibhav Singh\nfeatured-img: \"\"\n---", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Introduction\n\nEase of use, expressivity, and debuggability are among the core principles of PyTorch. One of the key drivers for the ease of use is that PyTorch execution is by default \u201ceager, i.e. op by op execution preserves the imperative nature of the program. However, eager execution does not offer the compiler based optimization, for example, the optimizations when the computation can be expressed as a graph.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "LazyTensor [[1]], first introduced with PyTorch/XLA, helps combine these seemingly disparate approaches. While PyTorch eager execution is widely used, intuitive, and well understood, lazy execution is not as prevalent yet.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "In this post we will explore some of the basic concepts of the LazyTensor System with the goal of applying these concepts to understand and debug performance of LazyTensor based implementations in PyTorch. Although we will use PyTorch/XLA on Cloud TPU as the vehicle for exploring these concepts, we hope that these ideas will be useful to understand other system(s) built on LazyTensors.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "LazyTensor\n\nAny operation performed on a PyTorch tensor is by default dispatched as a kernel or a composition of kernels to the underlying hardware. These kernels are executed asynchronously on the underlying hardware. The program execution is not blocked until the value of a tensor is fetched. This approach scales extremely well with massively parallel programmed hardware such as GPUs.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "The starting point of a LazyTensor system is a custom tensor type. In PyTorch/XLA, this type is called XLA tensor. In contrast to PyTorch\u2019s native tensor type, operations performed on XLA tensors are recorded into an IR graph. Let\u2019s examine an example that sums the product of two tensors:\n\n```python\nimport torch\nimport torch_xla\nimport torch_xla.core.xla_model as xm\n\ndev = xm.xla_device()\n\nx1 = torch.rand((3, 3)).to(dev)\nx2 = torch.rand((3, 8)).to(dev)", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "y1 = torch.einsum('bs,st->bt', x1, x2)\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "You can execute [this](https://github.com/ultrons/xla/blob/lazy-tensor-post/contrib/colab/LazyTensor_Basics.ipynb) colab notebook to examine the resulting graph for y1. Notice that no computation has been performed yet.\n\n```python\ny1 = y1 + x2\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "The operations will continue until PyTorch/XLA encounters a barrier. This barrier can either be a [mark step()](https://github.com/pytorch/xla/blob/ff079bb48744e5aa6696201ccf34057f15fc7cac/torch_xla/core/xla_model.py#L751) api call or any other event which forces the execution of the graph recorded so far.\n\n```python\nxm.mark_step()\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Once the mark_step() is called, the graph is compiled and then executed on TPU, i.e. the tensors have been materialized. Therefore, the graph is now reduced to a single line y1 tensor which holds the result of the computation.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Compile Once, Execute Often", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "XLA compilation passes offer optimizations (e.g. op-fusion, which reduces HBM pressure by using scratch-pad memory for multiple ops, [ref](https://arxiv.org/pdf/2004.13336.pdf) ) and leverages lower level XLA infrastructure to optimally use the underlying hardware. However, there is one caveat, compilation passes are expensive, i.e. can add to the training step time. Therefore, this approach scales well if and only if we can **compile once and execute often** (compilation cache helps, such that the same", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "graph is not compiled more than once).", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "In the following example, we create a small computation graph and time the execution:\n\n```python\ny1 = torch.rand((3, 8)).to(dev)\ndef dummy_step() :\n y1 = torch.einsum('bs,st->bt', y1, x)\n xm.mark_step()\n return y1", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "```python\n%timeit dummy_step\n```\n\n```python\nThe slowest run took 29.74 times longer than the fastest. This could mean that an intermediate result is being cached.\n10000000 loops, best of 5: 34.2 ns per loop", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "You notice that the slowest step is quite longer than the fastest. This is because of the graph compilation overhead which is incurred only once for a given shape of graph, input shape, and output shape. Subsequent steps are faster because no graph compilation is necessary.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "This also implies that we expect to see performance cliffs when the \u201ccompile once and execute often\u201d assumption breaks. Understanding when this assumption breaks is the key to understanding and optimizing the performance of a LazyTensor system. Let\u2019s examine what triggers the compilation.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Graph Compilation and Execution and LazyTensor Barrier", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "We saw that the computation graph is compiled and executed when a LazyTensor barrier is encountered. There are three scenarios when the LazyTensor barrier is automatically or manually introduced. The first is the explicit call of mark_step() api as shown in the preceding example. mark_step() is also called implicitly at every step when you wrap your dataloader with MpDeviceLoader (highly recommended to overlap compute and data upload to TPU device). The [Optimizer", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "step](https://github.com/pytorch/xla/blob/master/torch_xla/core/xla_model.py#L804) method of xla_model also allows to implicitly call mark_step (when you set barrier=True).", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "The second scenario where a barrier is introduced is when PyTorch/XLA finds an op with no mapping (lowering) to equivalent XLA HLO ops. PyTorch has [2000+](https://dev-discuss.pytorch.org/t/where-do-the-2000-pytorch-operators-come-from-more-than-you-wanted-to-know/373) operations. Although most of these operations are composite (i.e. can be expressed in terms of other fundamental operations), some of these operations do not have corresponding lowering in XLA.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "What happens when an op with no XLA lowering is used? PyTorch XLA stops the operation recording and cuts the graph(s) leading to the input(s) of the unlowered op. This cut graph is then compiled and dispatched for execution. The results (materialized tensor) of execution are sent back from device to host, the unlowered op is then executed on the host (cpu), and then downstream LazyTensor operations creating a new graph(s) until a barrier is encountered again.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "The third and final scenario which results in a LazyTensor barrier is when there is a control structure/statement or another method which requires the value of a tensor. This statement would at the minimum cause the execution of the computation graph leading to the tensor (if the graph has already been seen) or cause compilation and execution of both.\n\nOther examples of such methods include .item(), isEqual(). In general, any operation that maps Tensor -> Scalar will cause this behavior.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Dynamic Graph\n\nAs illustrated in the preceding section, graph compilation cost is amortized if the same shape of the graph is executed many times. It\u2019s because the compiled graph is cached with a hash derived from the graph shape, input shape, and the output shape. If these shapes change it will trigger compilation, and too frequent compilation will result in training time degradation.\n\nLet\u2019s consider the following example:", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "```python\ndef dummy_step(x, y, loss, acc=False):\n z = torch.einsum('bs,st->bt', y, x)\n step_loss = z.sum().view(1,)\n if acc:\n loss = torch.cat((loss, step_loss))\n else:\n loss = step_loss\n xm.mark_step()\n return loss", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "import time\ndef measure_time(acc=False):\n exec_times = []\n iter_count = 100\n x = torch.rand((512, 8)).to(dev)\n y = torch.rand((512, 512)).to(dev)\n loss = torch.zeros(1).to(dev)\n for i in range(iter_count):\n tic = time.time()\n loss = dummy_step(x, y, loss, acc=acc)\n toc = time.time()\n exec_times.append(toc - tic)\n return exec_times", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "dyn = measure_time(acc=True) # acc= True Results in dynamic graph\nst = measure_time(acc=False) # Static graph, computation shape, inputs and output shapes don't change\n\nimport matplotlib.pyplot as plt\nplt.plot(st, label = 'static graph')\nplt.plot(dyn, label = 'dynamic graph')\nplt.legend()\nplt.title('Execution time in seconds')", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\nNote that static and dynamic cases have the same computation but dynamic graph compiles every time, leading to the higher overall run-time. In practice, the training step with recompilation can sometimes be an order of magnitude or slower. In the next section we discuss some of the PyTorch/XLA tools to debug training degradation.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Profiling Training Performance with PyTorch/XLA", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch/XLA profiling consists of two major components. First is the client side profiling. This feature is turned on by simply setting the environment variable PT_XLA_DEBUG to 1. Client side profiling points to unlowered ops or device-to-host transfer in your source code. Client side profiling also reports if there are too frequent compilations happening during the training. You can explore some metrics and counters provided by PyTorch/XLA in conjunction with the profiler in", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "[this](https://github.com/ultrons/xla/blob/lazy-tensor-post/contrib/colab/Exploring_LazyTensor_with_Debug_Metrics.ipynb) notebook.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "The second component offered by PyTorch/XLA profiler is the inline trace annotation. For example:\n\n```python\nimport torch_xla.debug.profiler as xp", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "def train_imagenet():\n print('==> Preparing data..')\n img_dim = get_model_property('img_dim')\n ....\n server = xp.start_server(3294)\n def train_loop_fn(loader, epoch):\n ....\n model.train()\n for step, (data, target) in enumerate(loader):\n with xp.StepTrace('Train_Step', step_num=step):\n ....\n if FLAGS.amp:\n ....\n else:\n with xp.Trace('build_graph'):\n output = model(data)\n loss = loss_fn(output, target)\n loss.backward()", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "xm.optimizer_step(optimizer)", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Notice the start_server api call. The port number that you have used here is the same port number you will use with the tensorboard profiler in order to view the op trace similar to:\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Op trace along with the client-side debugging function is a powerful set of tools to debug and optimize your training performance with PyTorch/XLA. For more detailed instructions on the profiler usage, the reader is encouraged to explore blogs [part-1](https://cloud.google.com/blog/topics/developers-practitioners/pytorchxla-performance-debugging-tpu-vm-part-1), [part-2](https://cloud.google.com/blog/topics/developers-practitioners/pytorchxla-performance-debugging-cloud-tpu-vm-part-ii), and", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "[part-3](https://cloud.google.com/blog/topics/developers-practitioners/pytorchxla-performance-debugging-cloud-tpu-vm-part-iii) of the blog series on PyTorch/XLA performance debugging.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Summary\n\nIn this article we have reviewed the fundamentals of the LazyTensor system. We built on those fundamentals with PyTorch/XLA to understand the potential causes of training performance degradation. We discussed why \u201ccompile once and execute often\u201d helps to get the best performance on LazyTensor systems, and why training slows down when this assumption breaks.\n\nWe hope that PyTorch users will find these insights helpful for their novel works with LazyTensor systems.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Acknowledgements", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "A big thank you to my outstanding colleagues Jack Cao, Milad Mohammedi, Karl Weinmeister, Rajesh Thallam, Jordan Tottan (Google) and Geeta Chauhan (Meta) for their meticulous reviews and feedback. And thanks to the extended PyTorch/XLA development team from Google, Meta, and the open source community to make PyTorch possible on TPUs. And finally, thanks to the authors of the [LazyTensor paper](https://arxiv.org/pdf/2102.13267.pdf) not only for developing LazyTensor but also for writing such an accessible", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "paper.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "Refrences\n\n[[1]] LazyTensor: combining eager execution with domain-specific compilers\n\n[1]: https://arxiv.org/pdf/2102.13267.pdf", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Extending TorchVision\u2019s Transforms to Object Detection, Segmentation & Video tasks\"\nauthor: Philip Meier, Victor Fomin, Vasilis Vryniotis, Nicolas Hug\nfeatured-img: \"assets/images/Transforms-v2-feature-image.png\"\n---\n\n**Note**: A previous version of this post was published in November 2022. We have updated this post with the most up-to-date info, in view of the upcoming 0.15 release of torchvision in March 2023, jointly with PyTorch 2.0.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "TorchVision is extending its Transforms API! Here is what\u2019s new:\n\n- You can use them not only for Image Classification but also for Object Detection, Instance & Semantic Segmentation and Video Classification.\n- You can use new functional transforms for transforming Videos, Bounding Boxes and Segmentation Masks.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "The API is completely backward compatible with the previous one, and remains the same to assist the migration and adoption. We are now releasing this new API as Beta in the torchvision.transforms.v2 namespace, and we would love to get early feedback from you to improve its functionality. Please [_reach out to us_](https://github.com/pytorch/vision/issues/6753) if you have any questions or suggestions.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "Limitations of current Transforms\n\nThe existing Transforms API of TorchVision (aka V1) only supports single images. As a result it can only be used for classification tasks:\n\n```python\nfrom torchvision import transforms\ntrans = transforms.Compose([\n transforms.ColorJitter(contrast=0.5),\n transforms.RandomRotation(30),\n transforms.CenterCrop(480),\n])\nimgs = trans(imgs)", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
+{"page_content": "* Support for both Python-based and TorchScript-based models\n* Default handlers for common use cases (e.g., image segmentation, text classification) as well as the ability to write custom handlers for other use cases\n* Model versioning, the ability to run multiple versions of a model at the same time, and the ability to roll back to an earlier version\n* The ability to package a model, learning weights, and supporting files (e.g., class mappings, vocabularies) into a single, persistent artifact (a.k.a. the \u201cmodel archive\u201d)\n* Robust management capability, allowing full configuration of models, versions, and individual worker threads via command line, config file, or run-time API\n* Automatic batching of individual inferences across HTTP requests\n* Logging including common metrics, and the ability to incorporate custom metrics\n* Ready-made Dockerfile for easy deployment\n* HTTPS support for secure deployment", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
+{"page_content": "To learn more about the APIs and the design of this feature, see the links below:\n* See for a full multi-node deployment reference architecture.\n* The full documentation can be found [here](https://pytorch.org/serve).\n\n## TorchElastic integration with Kubernetes (Experimental)", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
+{"page_content": "[TorchElastic](https://github.com/pytorch/elastic) is a proven library for training large scale deep neural networks at scale within companies like Facebook, where having the ability to dynamically adapt to server availability and scale as new compute resources come online is critical. Kubernetes enables customers using machine learning frameworks like PyTorch to run training jobs distributed across fleets of powerful GPU instances like the Amazon EC2 P3. Distributed training jobs, however, are not fault-tolerant, and a job cannot continue if a node failure or reclamation interrupts training. Further, jobs cannot start without acquiring all required resources, or scale up and down without being restarted. This lack of resiliency and flexibility results in increased training time and costs from idle resources. TorchElastic addresses these limitations by enabling distributed training jobs to be executed in a fault-tolerant and elastic manner. Until today, Kubernetes users needed to manage Pods and Services required for TorchElastic training jobs manually.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
+{"page_content": "Through the joint collaboration of engineers at Facebook and AWS, TorchElastic, adding elasticity and fault tolerance, is now supported using vanilla Kubernetes and through the managed EKS service from AWS.\n\nTo learn more see the [TorchElastic repo](http://pytorch.org/elastic/0.2.0rc0/kubernetes.html) for the controller implementation and docs on how to use it.\n\n## torch_xla 1.5 now available", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
+{"page_content": "[torch_xla](http://pytorch.org/xla/) is a Python package that uses the [XLA linear algebra compiler](https://www.tensorflow.org/xla) to accelerate the [PyTorch deep learning framework](https://pytorch.org/) on [Cloud TPUs](https://cloud.google.com/tpu/) and [Cloud TPU Pods](https://cloud.google.com/tpu/docs/tutorials/pytorch-pod). torch_xla aims to give PyTorch users the ability to do everything they can do on GPUs on Cloud TPUs as well while minimizing changes to the user experience. The project began with a conversation at NeurIPS 2017 and gathered momentum in 2018 when teams from Facebook and Google came together to create a proof of concept. We announced this collaboration at PTDC 2018 and made the PyTorch/XLA integration broadly available at PTDC 2019. The project already has 28 contributors, nearly 2k commits, and a repo that has been forked more than 100 times.", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
+{"page_content": "This release of [torch_xla](http://pytorch.org/xla/) is aligned and tested with PyTorch 1.5 to reduce friction for developers and to provide a stable and mature PyTorch/XLA stack for training models using Cloud TPU hardware. You can [try it for free](https://medium.com/pytorch/get-started-with-pytorch-cloud-tpus-and-colab-a24757b8f7fc) in your browser on an 8-core Cloud TPU device with [Google Colab](https://colab.research.google.com/), and you can use it at a much larger scaleon [Google Cloud](https://cloud.google.com/gcp).\n\nSee the full torch_xla release notes [here](https://github.com/pytorch/xla/releases). Full docs and tutorials can be found [here](https://pytorch.org/xla/) and [here](https://cloud.google.com/tpu/docs/tutorials).\n\n## PyTorch Domain Libraries", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
+{"page_content": "torchaudio, torchvision, and torchtext complement PyTorch with common datasets, models, and transforms in each domain area. We\u2019re excited to share new releases for all three domain libraries alongside PyTorch 1.5 and the rest of the library updates. For this release, all three domain libraries are removing support for Python2 and will support Python3 only.\n\n### torchaudio 0.5\nThe torchaudio 0.5 release includes new transforms, functionals, and datasets. Highlights for the release include:\n\n* Added the Griffin-Lim functional and transform, `InverseMelScale` and `Vol` transforms, and `DB_to_amplitude`. \n* Added support for `allpass`, `fade`, `bandpass`, `bandreject`, `band`, `treble`, `deemph`, and `riaa` filters and transformations.\n* New datasets added including `LJSpeech` and `SpeechCommands` datasets. \n\nSee the release full notes [here](https://github.com/pytorch/audio/releases) and full docs can be found [here](https://pytorch.org/audio/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
+{"page_content": "### torchvision 0.6\nThe torchvision 0.6 release includes updates to datasets, models and a significant number of bug fixes. Highlights include:\n\n* Faster R-CNN now supports negative samples which allows the feeding of images without annotations at training time.\n* Added `aligned` flag to `RoIAlign` to match Detectron2. \n* Refactored abstractions for C++ video decoder\n\nSee the release full notes [here](https://github.com/pytorch/vision/releases) and full docs can be found [here](https://pytorch.org/docs/stable/torchvision/index.html).\n\n### torchtext 0.6\nThe torchtext 0.6 release includes a number of bug fixes and improvements to documentation. Based on user's feedback, dataset abstractions are currently being redesigned also. Highlights for the release include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
+{"page_content": "* Fixed an issue related to the SentencePiece dependency in conda package.\n* Added support for the experimental IMDB dataset to allow a custom vocab.\n* A number of documentation updates including adding a code of conduct and a deduplication of the docs on the torchtext site. \n\nYour feedback and discussions on the experimental datasets API are welcomed. You can send them to [issue #664](https://github.com/pytorch/text/issues/664). We would also like to highlight the pull request [here](https://github.com/pytorch/text/pull/701) where the latest dataset abstraction is applied to the text classification datasets. The feedback can be beneficial to finalizing this abstraction. \n\nSee the release full notes [here](https://github.com/pytorch/text/releases) and full docs can be found [here](https://pytorch.org/text/).\n\n\n*We\u2019d like to thank the entire PyTorch team, the Amazon team and the community for all their contributions to this work.*\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Understanding LazyTensor System Performance with PyTorch/XLA on Cloud TPU\"\nauthor: Vaibhav Singh\nfeatured-img: \"\"\n---\n\n## Introduction\n\nEase of use, expressivity, and debuggability are among the core principles of PyTorch. One of the key drivers for the ease of use is that PyTorch execution is by default \u201ceager, i.e. op by op execution preserves the imperative nature of the program. However, eager execution does not offer the compiler based optimization, for example, the optimizations when the computation can be expressed as a graph.\n\nLazyTensor [[1]], first introduced with PyTorch/XLA, helps combine these seemingly disparate approaches. While PyTorch eager execution is widely used, intuitive, and well understood, lazy execution is not as prevalent yet.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "In this post we will explore some of the basic concepts of the LazyTensor System with the goal of applying these concepts to understand and debug performance of LazyTensor based implementations in PyTorch. Although we will use PyTorch/XLA on Cloud TPU as the vehicle for exploring these concepts, we hope that these ideas will be useful to understand other system(s) built on LazyTensors.\n\n## LazyTensor\n\nAny operation performed on a PyTorch tensor is by default dispatched as a kernel or a composition of kernels to the underlying hardware. These kernels are executed asynchronously on the underlying hardware. The program execution is not blocked until the value of a tensor is fetched. This approach scales extremely well with massively parallel programmed hardware such as GPUs.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "The starting point of a LazyTensor system is a custom tensor type. In PyTorch/XLA, this type is called XLA tensor. In contrast to PyTorch\u2019s native tensor type, operations performed on XLA tensors are recorded into an IR graph. Let\u2019s examine an example that sums the product of two tensors:\n\n```python\nimport torch\nimport torch_xla\nimport torch_xla.core.xla_model as xm\n\ndev = xm.xla_device()\n\nx1 = torch.rand((3, 3)).to(dev)\nx2 = torch.rand((3, 8)).to(dev)\n\ny1 = torch.einsum('bs,st->bt', x1, x2)\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))\n```\n\nYou can execute [this](https://github.com/ultrons/xla/blob/lazy-tensor-post/contrib/colab/LazyTensor_Basics.ipynb) colab notebook to examine the resulting graph for y1. Notice that no computation has been performed yet.\n\n```python\ny1 = y1 + x2\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))\n```", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "The operations will continue until PyTorch/XLA encounters a barrier. This barrier can either be a [mark step()](https://github.com/pytorch/xla/blob/ff079bb48744e5aa6696201ccf34057f15fc7cac/torch_xla/core/xla_model.py#L751) api call or any other event which forces the execution of the graph recorded so far.\n\n```python\nxm.mark_step()\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))\n```\n\nOnce the mark_step() is called, the graph is compiled and then executed on TPU, i.e. the tensors have been materialized. Therefore, the graph is now reduced to a single line y1 tensor which holds the result of the computation.\n\n### Compile Once, Execute Often", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "XLA compilation passes offer optimizations (e.g. op-fusion, which reduces HBM pressure by using scratch-pad memory for multiple ops, [ref](https://arxiv.org/pdf/2004.13336.pdf) ) and leverages lower level XLA infrastructure to optimally use the underlying hardware. However, there is one caveat, compilation passes are expensive, i.e. can add to the training step time. Therefore, this approach scales well if and only if we can **compile once and execute often** (compilation cache helps, such that the same graph is not compiled more than once).\n\nIn the following example, we create a small computation graph and time the execution:\n\n```python\ny1 = torch.rand((3, 8)).to(dev)\ndef dummy_step() :\n y1 = torch.einsum('bs,st->bt', y1, x)\n xm.mark_step()\n return y1\n```\n\n```python\n%timeit dummy_step\n```\n\n```python\nThe slowest run took 29.74 times longer than the fastest. This could mean that an intermediate result is being cached.\n10000000 loops, best of 5: 34.2 ns per loop\n```", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "You notice that the slowest step is quite longer than the fastest. This is because of the graph compilation overhead which is incurred only once for a given shape of graph, input shape, and output shape. Subsequent steps are faster because no graph compilation is necessary.\n\nThis also implies that we expect to see performance cliffs when the \u201ccompile once and execute often\u201d assumption breaks. Understanding when this assumption breaks is the key to understanding and optimizing the performance of a LazyTensor system. Let\u2019s examine what triggers the compilation.\n\n### Graph Compilation and Execution and LazyTensor Barrier", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "We saw that the computation graph is compiled and executed when a LazyTensor barrier is encountered. There are three scenarios when the LazyTensor barrier is automatically or manually introduced. The first is the explicit call of mark_step() api as shown in the preceding example. mark_step() is also called implicitly at every step when you wrap your dataloader with MpDeviceLoader (highly recommended to overlap compute and data upload to TPU device). The [Optimizer step](https://github.com/pytorch/xla/blob/master/torch_xla/core/xla_model.py#L804) method of xla_model also allows to implicitly call mark_step (when you set barrier=True).", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "The second scenario where a barrier is introduced is when PyTorch/XLA finds an op with no mapping (lowering) to equivalent XLA HLO ops. PyTorch has [2000+](https://dev-discuss.pytorch.org/t/where-do-the-2000-pytorch-operators-come-from-more-than-you-wanted-to-know/373) operations. Although most of these operations are composite (i.e. can be expressed in terms of other fundamental operations), some of these operations do not have corresponding lowering in XLA.\n\n\n
\n
\n\nWhat happens when an op with no XLA lowering is used? PyTorch XLA stops the operation recording and cuts the graph(s) leading to the input(s) of the unlowered op. This cut graph is then compiled and dispatched for execution. The results (materialized tensor) of execution are sent back from device to host, the unlowered op is then executed on the host (cpu), and then downstream LazyTensor operations creating a new graph(s) until a barrier is encountered again.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "The third and final scenario which results in a LazyTensor barrier is when there is a control structure/statement or another method which requires the value of a tensor. This statement would at the minimum cause the execution of the computation graph leading to the tensor (if the graph has already been seen) or cause compilation and execution of both.\n\nOther examples of such methods include .item(), isEqual(). In general, any operation that maps Tensor -> Scalar will cause this behavior.\n\n### Dynamic Graph\n\nAs illustrated in the preceding section, graph compilation cost is amortized if the same shape of the graph is executed many times. It\u2019s because the compiled graph is cached with a hash derived from the graph shape, input shape, and the output shape. If these shapes change it will trigger compilation, and too frequent compilation will result in training time degradation.\n\nLet\u2019s consider the following example:", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "```python\ndef dummy_step(x, y, loss, acc=False):\n z = torch.einsum('bs,st->bt', y, x)\n step_loss = z.sum().view(1,)\n if acc:\n loss = torch.cat((loss, step_loss))\n else:\n loss = step_loss\n xm.mark_step()\n return loss\n\n\nimport time\ndef measure_time(acc=False):\n exec_times = []\n iter_count = 100\n x = torch.rand((512, 8)).to(dev)\n y = torch.rand((512, 512)).to(dev)\n loss = torch.zeros(1).to(dev)\n for i in range(iter_count):\n tic = time.time()\n loss = dummy_step(x, y, loss, acc=acc)\n toc = time.time()\n exec_times.append(toc - tic)\n return exec_times\n\ndyn = measure_time(acc=True) # acc= True Results in dynamic graph\nst = measure_time(acc=False) # Static graph, computation shape, inputs and output shapes don't change\n\nimport matplotlib.pyplot as plt\nplt.plot(st, label = 'static graph')\nplt.plot(dyn, label = 'dynamic graph')\nplt.legend()\nplt.title('Execution time in seconds')\n```\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "Note that static and dynamic cases have the same computation but dynamic graph compiles every time, leading to the higher overall run-time. In practice, the training step with recompilation can sometimes be an order of magnitude or slower. In the next section we discuss some of the PyTorch/XLA tools to debug training degradation.\n\n### Profiling Training Performance with PyTorch/XLA\n\nPyTorch/XLA profiling consists of two major components. First is the client side profiling. This feature is turned on by simply setting the environment variable PT_XLA_DEBUG to 1. Client side profiling points to unlowered ops or device-to-host transfer in your source code. Client side profiling also reports if there are too frequent compilations happening during the training. You can explore some metrics and counters provided by PyTorch/XLA in conjunction with the profiler in [this](https://github.com/ultrons/xla/blob/lazy-tensor-post/contrib/colab/Exploring_LazyTensor_with_Debug_Metrics.ipynb) notebook.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "The second component offered by PyTorch/XLA profiler is the inline trace annotation. For example:\n\n```python\nimport torch_xla.debug.profiler as xp\n\ndef train_imagenet():\n print('==> Preparing data..')\n img_dim = get_model_property('img_dim')\n ....\n server = xp.start_server(3294)\n def train_loop_fn(loader, epoch):\n ....\n model.train()\n for step, (data, target) in enumerate(loader):\n with xp.StepTrace('Train_Step', step_num=step):\n ....\n if FLAGS.amp:\n ....\n else:\n with xp.Trace('build_graph'):\n output = model(data)\n loss = loss_fn(output, target)\n loss.backward()\n xm.optimizer_step(optimizer)\n```\n\nNotice the start_server api call. The port number that you have used here is the same port number you will use with the tensorboard profiler in order to view the op trace similar to:\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "Op trace along with the client-side debugging function is a powerful set of tools to debug and optimize your training performance with PyTorch/XLA. For more detailed instructions on the profiler usage, the reader is encouraged to explore blogs [part-1](https://cloud.google.com/blog/topics/developers-practitioners/pytorchxla-performance-debugging-tpu-vm-part-1), [part-2](https://cloud.google.com/blog/topics/developers-practitioners/pytorchxla-performance-debugging-cloud-tpu-vm-part-ii), and [part-3](https://cloud.google.com/blog/topics/developers-practitioners/pytorchxla-performance-debugging-cloud-tpu-vm-part-iii) of the blog series on PyTorch/XLA performance debugging.\n\n### Summary", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "### Summary\n\nIn this article we have reviewed the fundamentals of the LazyTensor system. We built on those fundamentals with PyTorch/XLA to understand the potential causes of training performance degradation. We discussed why \u201ccompile once and execute often\u201d helps to get the best performance on LazyTensor systems, and why training slows down when this assumption breaks.\n\nWe hope that PyTorch users will find these insights helpful for their novel works with LazyTensor systems.\n\n### Acknowledgements\n\nA big thank you to my outstanding colleagues Jack Cao, Milad Mohammedi, Karl Weinmeister, Rajesh Thallam, Jordan Tottan (Google) and Geeta Chauhan (Meta) for their meticulous reviews and feedback. And thanks to the extended PyTorch/XLA development team from Google, Meta, and the open source community to make PyTorch possible on TPUs. And finally, thanks to the authors of the [LazyTensor paper](https://arxiv.org/pdf/2102.13267.pdf) not only for developing LazyTensor but also for writing such an accessible paper.", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "## Refrences\n\n[[1]] LazyTensor: combining eager execution with domain-specific compilers\n\n[1]: https://arxiv.org/pdf/2102.13267.pdf", "metadata": {"source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Extending TorchVision\u2019s Transforms to Object Detection, Segmentation & Video tasks\"\nauthor: Philip Meier, Victor Fomin, Vasilis Vryniotis, Nicolas Hug\nfeatured-img: \"assets/images/Transforms-v2-feature-image.png\"\n---\n\n**Note**: A previous version of this post was published in November 2022. We have updated this post with the most up-to-date info, in view of the upcoming 0.15 release of torchvision in March 2023, jointly with PyTorch 2.0.\n\nTorchVision is extending its Transforms API! Here is what\u2019s new:\n\n- You can use them not only for Image Classification but also for Object Detection, Instance & Semantic Segmentation and Video Classification.\n- You can use new functional transforms for transforming Videos, Bounding Boxes and Segmentation Masks.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
+{"page_content": "The API is completely backward compatible with the previous one, and remains the same to assist the migration and adoption. We are now releasing this new API as Beta in the torchvision.transforms.v2 namespace, and we would love to get early feedback from you to improve its functionality. Please [_reach out to us_](https://github.com/pytorch/vision/issues/6753) if you have any questions or suggestions.\n\n## Limitations of current Transforms\n\nThe existing Transforms API of TorchVision (aka V1) only supports single images. As a result it can only be used for classification tasks:\n\n```python\nfrom torchvision import transforms\ntrans = transforms.Compose([\n transforms.ColorJitter(contrast=0.5),\n transforms.RandomRotation(30),\n transforms.CenterCrop(480),\n])\nimgs = trans(imgs)\n```", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
{"page_content": "The above approach doesn\u2019t support Object Detection nor Segmentation. This limitation made any non-classification Computer Vision tasks second-class citizens as one couldn\u2019t use the Transforms API to perform the necessary augmentations. Historically this made it difficult to train high-accuracy models using TorchVision\u2019s primitives and thus our Model Zoo lagged by several points from SoTA.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "To circumvent this limitation, TorchVision offered [_custom implementations_](https://github.com/pytorch/vision/blob/main/references/detection/transforms.py) in its reference scripts that show-cased how one could perform augmentations in each task. Though this practice enabled us to train high accuracy [_classification_](https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/), [_object detection &", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "segmentation_](https://pytorch.org/blog/pytorch-1.12-new-library-releases/#beta-object-detection-and-instance-segmentation) models, it was a hacky approach which made those transforms impossible to import from the TorchVision binary.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "The new Transforms API\n\nThe Transforms V2 API supports videos, bounding boxes, and segmentation masks meaning that it offers native support for many Computer Vision tasks. The new solution is a drop-in replacement:\n\n```python\nimport torchvision.transforms.v2 as transforms\n\n# Exactly the same interface as V1:\ntrans = transforms.Compose([\n transforms.ColorJitter(contrast=0.5),\n transforms.RandomRotation(30),\n transforms.CenterCrop(480),\n])\nimgs, bboxes, labels = trans(imgs, bboxes, labels)", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "The new Transform Classes can receive any arbitrary number of inputs without enforcing specific order or structure:\n\n```python\n# Already supported:\ntrans(imgs) # Image Classification\ntrans(videos) # Video Tasks\ntrans(imgs, bboxes, labels) # Object Detection\ntrans(imgs, bboxes, masks, labels) # Instance Segmentation\ntrans(imgs, masks) # Semantic Segmentation\ntrans({\"image\": imgs, \"box\": bboxes, \"tag\": labels}) # Arbitrary Structure", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "# Future support:\ntrans(imgs, bboxes, labels, keypoints) # Keypoint Detection\ntrans(stereo_images, disparities, masks) # Depth Perception\ntrans(image1, image2, optical_flows, masks) # Optical Flow\ntrans(imgs_or_videos, labels) # MixUp/CutMix-style Transforms", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "The Transform Classes make sure that they apply the same random transforms to all the inputs to ensure consistent results.\n\nThe functional API has been updated to support all necessary signal processing kernels (resizing, cropping, affine transforms, padding etc) for all inputs:\n\n```python\nfrom torchvision.transforms.v2 import functional as F", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "# High-level dispatcher, accepts any supported input type, fully BC\nF.resize(inpt, size=[224, 224])\n# Image tensor kernel\nF.resize_image_tensor(img_tensor, size=[224, 224], antialias=True) \n# PIL image kernel\nF.resize_image_pil(img_pil, size=[224, 224], interpolation=BILINEAR)\n# Video kernel\nF.resize_video(video, size=[224, 224], antialias=True) \n# Mask kernel\nF.resize_mask(mask, size=[224, 224])\n# Bounding box kernel\nF.resize_bounding_box(bbox, size=[224, 224], spatial_size=[256, 256])", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "Under the hood, the API uses Tensor subclassing to wrap the input, attach useful meta-data and dispatch to the right kernel. For your data to be compatible with these new transforms, you can either use the provided dataset wrapper which should work with most of torchvision built-in datasets, or your can wrap your data manually into Datapoints:", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfrom torchvision.datasets import wrap_dataset_for_transforms_v2\nds = CocoDetection(..., transforms=v2_transforms)\nds = wrap_dataset_for_transforms_v2(ds) # data is now compatible with transforms v2!\n\n# Or wrap your data manually using the lower-level Datapoint classes:\nfrom torchvision import datapoints\n\nimgs = datapoints.Image(images)\nvids = datapoints.Video(videos)\nmasks = datapoints.Mask(target[\"masks\u201c])\nbboxes = datapoints.BoundingBox(target[\"boxes\u201c], format=\u201dXYXY\u201d, spatial_size=imgs.shape)", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "In addition to the new API, we now provide importable implementations for several data augmentations that are used in SoTA research such as [_Large Scale Jitter_](https://github.com/pytorch/vision/blob/928b05cad36eadb13e169f03028767c8bcd1f21d/torchvision/transforms/v2/_geometry.py#L1109), [_AutoAugmentation_](https://github.com/pytorch/vision/blob/main/torchvision/transforms/v2/_auto_augment.py) methods and [_several_](https://github.com/pytorch/vision/blob/main/torchvision/transforms/v2/__init__.py) new", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "Geometric, Color and Type Conversion transforms.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "The API continues to support both PIL and Tensor backends for Images, single or batched input and maintains JIT-scriptability on both the functional and class APIs.. The new API has been [_verified_](https://github.com/pytorch/vision/pull/6433#issuecomment-1256741233) to achieve the same accuracy as the previous implementation.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "An end-to-end example\n\n Here is an example of the new API using the following [_image_](https://user-images.githubusercontent.com/5347466/195350223-8683ef25-1367-4292-9174-c15f85c7358e.jpg). It works both with PIL images and Tensors. For more examples and tutorials, [_take a look at our gallery!_](https://pytorch.org/vision/0.15/auto_examples/index.html)", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfrom torchvision import io, utils\nfrom torchvision import datapoints\nfrom torchvision.transforms import v2 as T\nfrom torchvision.transforms.v2 import functional as F", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "# Defining and wrapping input to appropriate Tensor Subclasses\npath = \"COCO_val2014_000000418825.jpg\"\nimg = datapoints.Image(io.read_image(path))\n# img = PIL.Image.open(path)\nbboxes = datapoints.BoundingBox(\n [[2, 0, 206, 253], [396, 92, 479, 241], [328, 253, 417, 332],\n [148, 68, 256, 182], [93, 158, 170, 260], [432, 0, 438, 26],\n [422, 0, 480, 25], [419, 39, 424, 52], [448, 37, 456, 62],\n [435, 43, 437, 50], [461, 36, 469, 63], [461, 75, 469, 94],", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "[469, 36, 480, 64], [440, 37, 446, 56], [398, 233, 480, 304],\n [452, 39, 463, 63], [424, 38, 429, 50]],\n format=datapoints.BoundingBoxFormat.XYXY,\n spatial_size=F.get_spatial_size(img),\n)\nlabels = [59, 58, 50, 64, 76, 74, 74, 74, 74, 74, 74, 74, 74, 74, 50, 74, 74]\n# Defining and applying Transforms V2\ntrans = T.Compose(\n [\n T.ColorJitter(contrast=0.5),\n T.RandomRotation(30),\n T.CenterCrop(480),\n ]\n)\nimg, bboxes, labels = trans(img, bboxes, labels)", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "# Visualizing results\nviz = utils.draw_bounding_boxes(F.to_image_tensor(img), boxes=bboxes)\nF.to_pil_image(viz).show()\n```", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "Development milestones and future work\n\nHere is where we are in development:", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "- [x] Design API\n- [x] Write Kernels for transforming Videos, Bounding Boxes, Masks and Labels\n- [x] Rewrite all existing Transform Classes (stable + references) on the new API:\n - [x] Image Classification\n - [x] Video Classification\n - [x] Object Detection\n - [x] Instance Segmentation\n - [x] Semantic Segmentation\n- [x] Verify the accuracy of the new API for all supported Tasks and Backends\n- [x] Speed Benchmarks and Performance Optimizations (in progress - planned for Dec)", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "- [x] Graduate from Prototype (planned for Q1)\n- [ ] Add support of Depth Perception, Keypoint Detection, Optical Flow and more (future)\n- [ ] Add smooth support for batch-wise transforms like MixUp and CutMix", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "We would love to get [_feedback_](https://github.com/pytorch/vision/issues/6753) from you to improve its functionality. Please reach out to us if you have any questions or suggestions.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"New Library Updates in PyTorch 1.13\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/new-library-updates-in-pytorch-1.13-2.jpg\"\n---", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Summary\n\nWe are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 1.13 [release](https://github.com/pytorch/pytorch/releases). These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch.\n\nAlong with **1.13**, we are releasing updates to the PyTorch Libraries, please find them below.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "TorchAudio", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Hybrid Demucs Model and Pipeline\n\nHybrid Demucs is a music source separation model that uses both spectrogram and time domain features. It has demonstrated state-of-the-art performance in the Sony\u00ae Music DeMixing Challenge. (citation: [https://arxiv.org/abs/2111.03600](https://arxiv.org/abs/2111.03600))\n\nThe TorchAudio v0.13 release includes the following features", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "- MUSDB_HQ Dataset, which is used in Hybrid Demucs training ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.MUSDB_HQ.html#torchaudio.datasets.MUSDB_HQ))\n- Hybrid Demucs model architecture ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.models.HDemucs.html#torchaudio.models.HDemucs))\n- Three factory functions suitable for different sample rate ranges\n- Pre-trained pipelines ([docs](https://pytorch.org/audio/0.13.0/pipelines.html#id46))", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "- SDR Results of pre-trained pipelines on MUSDB_HQ test set\n- Tutorial that steps through music source separation using the pretrained pipeline ([docs](https://pytorch.org/audio/0.13.0/tutorials/hybrid_demucs_tutorial.html))", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "| Pipeline | All | Drums | Bass | Other | Vocals |\n|----------------------------------------|-------|-------|--------|-------|--------|\n| HDEMUCS_HIGH_MUSDB* | 6.42 | 7.76 | 6.51 | 4.47 | 6.93 |\n| HDEMUCS_HIGH_MUSDB_PLUS** | 9.37 | 11.38 | 10.53 | 7.24 | 8.32 |", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "* Trained on the training data of MUSDB-HQ dataset.
** Trained on both training and test sets of MUSDB-HQ and 150 extra songs from an internal database that were specifically produced for Meta.
\n\n```python\nfrom torchaudio.pipelines import HDEMUCS_HIGH_MUSDB_PLUS\n\nbundle = HDEMUCS_HIGH_MUSDB_PLUS\nmodel = bundle.get_model()\nsources_list = model.sources\n\nmixture, samplerate = torchaudio.load(\"song.wav\")\nsources = model(mixture)\naudios = dict(zip(sources_list, sources)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Special thanks to Alexandre Defossez for the guidance.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Datasets and Metadata Mode for SUPERB Benchmark\n\nTorchAudio adds support for various audio-related datasets used in downstream tasks for benchmarking self-supervised learning models. With the addition of several new datasets, there is now support for the downstream tasks in version 1 of the [SUPERB benchmark](https://superbbenchmark.org/), which can be found in the [s3prl repository](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/docs/superb.md).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "For these datasets, we also add metadata support through a `get_metadata` function, enabling faster dataset iteration or preprocessing without the need to load waveforms. The function returns the same features as `__getitem__`, except it returns the relative waveform path rather than the loaded waveform.\n\nDatasets with metadata functionality", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "- LIBRISPEECH ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.LIBRISPEECH.html#torchaudio.datasets.LIBRISPEECH))\n- LibriMix ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.LibriMix.html#torchaudio.datasets.LibriMix))\n- QUESST14 ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.QUESST14.html#torchaudio.datasets.QUESST14))", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "- SPEECHCOMMANDS ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.SPEECHCOMMANDS.html#torchaudio.datasets.SPEECHCOMMANDS))\n- (new) FluentSpeechCommands ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.FluentSpeechCommands.html#torchaudio.datasets.FluentSpeechCommands))\n- (new) Snips ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.Snips.html#torchaudio.datasets.Snips))", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "- (new) IEMOCAP ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.IEMOCAP.html#torchaudio.datasets.IEMOCAP))\n- (new) VoxCeleb1 ([Identification](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.VoxCeleb1Identification.html#torchaudio.datasets.VoxCeleb1Identification), [Verification](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.VoxCeleb1Verification.html#torchaudio.datasets.VoxCeleb1Verification))", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Custom Language Model support in CTC Beam Search Decoding\n\nTorchAudio released a CTC beam search decoder in release 0.12, with KenLM language model support. This release, there is added functionality for creating custom Python language models that are compatible with the decoder, using the `torchaudio.models.decoder.CTCDecoderLM` wrapper.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "For more information on using a custom language model, please refer to the [documentation](https://pytorch.org/audio/0.13.0/generated/torchaudio.models.decoder.CTCDecoder.html#ctcdecoderlm) and [tutorial](https://pytorch.org/audio/0.13.0/tutorials/asr_inference_with_ctc_decoder_tutorial.html#custom-language-model).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) StreamWriter\n\ntorchaudio.io.StreamWriter is a class for encoding media including audio and video. This can handle a wide variety of codecs, chunk-by-chunk encoding and GPU encoding.\n\n```python\nwriter = StreamWriter(\"example.mp4\")\nwriter.add_audio_stream(\n sample_rate=16_000,\n num_channels=2,\n)\nwriter.add_video_stream(\n frame_rate=30,\n height=96,\n width=128,\n format=\"rgb24\",\n)\nwith writer.open():\n writer.write_audio_chunk(0, audio)\n writer.write_video_chunk(1, video)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "For more information, refer to [the documentation](https://pytorch.org/audio/0.13.0/generated/torchaudio.io.StreamWriter.html) and the following tutorials\n- [StreamWriter Basic Usage](https://pytorch.org/audio/0.13.0/tutorials/streamwriter_basic_tutorial.html)\n- [StreamWriter Advanced Usage](https://pytorch.org/audio/0.13.0/tutorials/streamwriter_advanced.html)\n- [Hardware-Accelerated Video Decoding and Encoding](https://pytorch.org/audio/0.13.0/hw_acceleration_tutorial.html)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "TorchData\n\nFor a complete list of changes and new features, please visit [our repository\u2019s 0.5.0 release note](https://github.com/pytorch/data/releases).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) DataLoader2\n\n`DataLoader2` was introduced in the last release to execute `DataPipe` graph, with support for dynamic sharding for multi-process/distributed data loading, multiple backend ReadingServices, and `DataPipe` graph in-place modification (e.g. shuffle control).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "In this release, we further consolidated the API for `DataLoader2` and a [detailed documentation is now available here](https://pytorch.org/data/0.5/dataloader2.html). We continue to welcome early adopters and feedback, as well as potential contributors. If you are interested in trying it out, we encourage you to install the nightly version of TorchData.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Data Loading from Cloud Service Providers\n\nWe extended our support to load data from additional cloud storage providers via DataPipes, now covering AWS, Google Cloud Storage, and Azure. A [tutorial is also available](https://pytorch.org/data/0.5/tutorial.html#working-with-cloud-storage-providers). We are open to feedback and feature requests.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "We also performed a simple benchmark, comparing the performance of data loading from AWS S3 and attached volume on an AWS EC2 instance. The results are [visible here](https://github.com/pytorch/data/blob/gh/NivekT/100/head/benchmarks/cloud/aws_s3_results.md).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "torch::deploy (Beta)\n\ntorch::deploy is now in Beta! torch::deploy is a C++ library for Linux based operating systems that allows you to run multiple Python interpreters in a single process. You can run your existing eager PyTorch models without any changes for production inference use cases. Highlights include:", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "- Existing models work out of the box\u2013no need to modify your python code to support tracing.\n- Full support for your existing Python environment including C extensions.\n- No need to cross process boundaries to load balance in multi-GPU serving environments.\n- Model weight can be shared between multiple Python interpreters.\n- A vastly improved installation and setup process.\n\n```Python\ntorch::deploy::InterpreterManager manager(4);\n\n// access one of the 4 interpreters\nauto I = manager.acquireOne();", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "// run infer from your_model.py\nI.global(\"your_model\", \"infer\")({at::randn({10, 240, 320})});", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Learn more [here](https://github.com/pytorch/multipy).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) CUDA/ROCm/CPU Backends\n\ntorch::deploy now links against standard PyTorch Python distributions so all accelerators that PyTorch core supports such as CUDA and AMD/HIP work out of the box.\n\n- Can install any device variant of PyTorch via pip/conda like normal.\n- [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) aarch64/arm64 support\n\ntorch::deploy now has basic support for aarch64 Linux systems.\n\n- We're looking to gather feedback on it and learn more about arm use cases for eager PyTorch models.\n- Learn more / share your use case at [https://github.com/pytorch/multipy/issues/64](https://github.com/pytorch/multipy/issues/64)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "TorchEval", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) Introducing Native Metrics Support for PyTorch\n\nTorchEval is a library built for users who want highly performant implementations of common metrics to evaluate machine learning models. It also provides an easy to use interface for building custom metrics with the same toolkit. Building your metrics with TorchEval makes running distributed training loops with [torch.distributed](https://pytorch.org/docs/stable/distributed.html) a breeze.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Learn more with our [docs](https://pytorch.org/torcheval), see our [examples](https://pytorch.org/torcheval/metric_example.html), or check out our [GitHub repo](http://github.com/pytorch/torcheval).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "TorchMultimodal Release (Beta)\n\nPlease watch for upcoming blogs in early November that will introduce TorchMultimodal, a PyTorch domain library for training SoTA multi-task multimodal models at scale, in more details; in the meantime, play around with the library and models through our [tutorial](https://pytorch.org/tutorials/beginner/flava_finetuning_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "TorchRec", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) Simplified Optimizer Fusion APIs", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "We\u2019ve provided a simplified and more intuitive API for setting fused optimizer settings via apply_optimizer_in_backward. This new approach enables the ability to specify optimizer settings on a per-parameter basis and sharded modules will configure [FBGEMM\u2019s TableBatchedEmbedding modules accordingly](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L181). Additionally, this now let's TorchRec\u2019s planner account for optimizer memory usage. This should", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "alleviate reports of sharding jobs OOMing after using Adam using a plan generated from planner.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) Simplified Sharding APIs", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "We\u2019re introducing the shard API, which now allows you to shard only the embedding modules within a model, and provides an alternative to the current main entry point - DistributedModelParallel. This lets you have a finer grained control over the rest of the model, which can be useful for customized parallelization logic, and inference use cases (which may not require any parallelization on the dense layers). We\u2019re also introducing construct_module_sharding_plan, providing a simpler interface to the TorchRec", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "sharder.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Quantized Comms", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Applying [quantization or mixed precision](https://dlp-kdd.github.io/assets/pdf/a11-yang.pdf) to tensors in a collective call during model parallel training greatly improves training efficiency, with little to no effect on model quality. TorchRec now integrates with the [quantized comms library provided by FBGEMM GPU](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/quantize_comm.py) and provides an interface to construct encoders and decoders (codecs) that surround the all_to_all, and", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "reduce_scatter collective calls in the output_dist of a sharded module. We also allow you to construct your own codecs to apply to your sharded module. The codces provided by FBGEMM allow FP16, BF16, FP8, and INT8 compressions, and you may use different quantizations for the forward pass and backward pass.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "TorchSnapshot (Beta)\n\nAlong with PyTorch 1.13, we are releasing the beta version of TorchSnapshot, which is a performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind. Highlights include:", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "- Performance: TorchSnapshot provides a fast checkpointing implementation employing various optimizations, including zero-copy serialization for most tensor types, overlapped device-to-host copy and storage I/O, parallelized storage I/O\n- Memory Use: TorchSnapshot's memory usage adapts to the host's available resources, greatly reducing the chance of out-of-memory issues when saving and loading checkpoints\n- Usability: Simple APIs that are consistent between distributed and non-distributed workloads", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Learn more with our [tutorial](https://pytorch.org/torchsnapshot/main/getting_started.html).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "TorchVision", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "We are happy to introduce torchvision v0.14 [(release note)](https://github.com/pytorch/vision/releases). This version introduces a new [model registration API](https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/) to help users retrieving and listing models and weights. It also includes new image and video classification models such as MViT, S3D, Swin Transformer V2, and MaxViT. Last but not least, we also have new primitives and augmentation such as PolynomicalLR", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "scheduler and SimpleCopyPaste.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Model Registration API", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Following up on the [multi-weight support API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) that was released on the previous version, we have added a new [model registration API](https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/) to help users retrieve models and weights. There are now 4 new methods under the torchvision.models module: get_model, get_model_weights, get_weight, and list_models. Here are examples of how we can use", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "them:", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "```Python\nimport torchvision\nfrom torchvision.models import get_model, get_model_weights, list_models\n\n\nmax_params = 5000000\n\ntiny_models = []\nfor model_name in list_models(module=torchvision.models):\n weights_enum = get_model_weights(model_name)\n if len([w for w in weights_enum if w.meta[\"num_params\"] <= max_params]) > 0:\n tiny_models.append(model_name)\n\nprint(tiny_models)\n# ['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "model = get_model(tiny_models[0], weights=\"DEFAULT\")\nprint(sum(x.numel() for x in model.state_dict().values()))\n# 2239188\n```", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) New Video Classification Models\n\nWe added two new video classification models, MViT and S3D. MViT is a state of the art video classification transformer model which has 80.757% accuracy on the Kinetics400 dataset, while S3D is a relatively small model with good accuracy for its size. These models can be used as follows:\n\n```Python\nimport torch\nfrom torchvision.models.video import *", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "video = torch.rand(3, 32, 800, 600)\nmodel = mvit_v2_s(weights=\"DEFAULT\")\n# model = s3d(weights=\"DEFAULT\")\nmodel.eval()\nprediction = model(images)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Here is the table showing the accuracy of the new video classification models tested in the Kinetics400 dataset.\n\n| **Model** | **Acc@1** | **Acc@5** |\n|--------------------------------|-----------|-----------|\n| mvit_v1_b | 81.474 | 95.776 |\n| mvit_v2_s | 83.196 | 96.36 |\n| s3d | 83.582 | 96.64 |", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "We would like to thank Haoqi Fan, Yanghao Li, Christoph Feichtenhofer and Wan-Yen Lo for their work on [PyTorchVideo](https://github.com/facebookresearch/pytorchvideo/) and their support during the development of the MViT model. We would like to thank Sophia Zhi for her contribution implementing the S3D model in torchvision.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) New Architecture and Model Variants\n\nFor Classification Models, we\u2019ve added the Swin Transformer V2 architecture along with pre-trained weights for its tiny/small/base variants. In addition, we have added support for the MaxViT transformer. Here is an example on how to use the models:\n\n```Python\nimport torch\nfrom torchvision.models import *\n\nimage = torch.rand(1, 3, 224, 224)\nmodel = swin_v2_t(weights=\"DEFAULT\").eval()\n# model = maxvit_t(weights=\"DEFAULT\").eval()\nprediction = model(image)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Here is the table showing the accuracy of the models tested on ImageNet1K dataset.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "| **Model** | **Acc@1** | **Acc@1 change over V1** | **Acc@5** | **Acc@5 change over V1** |\n|---------------|-----------|--------------------------|-----------|--------------------------|\n| swin_v2_t | 82.072 | + 0.598 | 96.132 | + 0.356 |\n| swin_v2_s | 83.712 | + 0.516 | 96.816 | + 0.456 |\n| swin_v2_b | 84.112 | + 0.530 | 96.864 | + 0.224 |", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "| maxvit_t | 83.700 | - | 96.722 | - |", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "We would like to thank [Ren Pang](https://github.com/ain-soph) and [Teodor Poncu](https://github.com/TeodorPoncu) for contributing the 2 models to torchvision.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) New Primitives & Augmentations", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "In this release we\u2019ve added the [SimpleCopyPaste](https://arxiv.org/abs/2012.07177) augmentation in our reference scripts and we up-streamed the PolynomialLR scheduler to PyTorch Core. We would like to thank [Lezwon Castelino](https://github.com/lezwon) and [Federico Pozzi](https://github.com/federicopozzi33) for their contributions. We are continuing our efforts to modernize TorchVision by adding more SoTA primitives, Augmentations and architectures with the help of our community. If you are interested in", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "contributing, have a look at the following [issue](https://github.com/pytorch/vision/issues/6323).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Torch-TensorRT", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) TensorRT with FX2TRT frontend\n\nTorch-TensorRT is the PyTorch integration for TensorRT, providing high performance inference on NVIDIA GPUs. Torch-TRT allows for optimizing models directly in PyTorch for deployment providing up to 6x performance improvement.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Torch-TRT is an AoT compiler which ingests an nn.Module or TorchScript module, optimizes compatible subgraphs in TensorRT & leaves the rest to run in PyTorch. This gives users the performance of TensorRT, but the usability and familiarity of Torch.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Torch-TensorRT is part of the PyTorch ecosystem, and was released as v1.0 in November \u201821. There are currently two distinct front-ends: Torchscript & FX. Each provides the same value proposition and underlying operation with the primary difference being the input & output formats (TS vs FX / Python).\n\nThe Torchscript front-end was included in v1.0 and should be considered stable. The FX front-end is first released in v1.2 and should be considered a Beta.\n\nRelevant Links:", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "- [Github](https://github.com/pytorch/TensorRT)\n- [Documentation](https://pytorch.org/TensorRT/)\n- [Generic (TS) getting started guide](https://pytorch.org/TensorRT/getting_started/getting_started_with_python_api.html)\n- [FX getting started guide](https://pytorch.org/TensorRT/tutorials/getting_started_with_fx_path.html)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) Introducing Torch-TensorRT", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Torch-TensorRT is an integration for PyTorch that leverages inference optimizations of TensorRT on NVIDIA GPUs. It takes advantage of TensorRT optimizations, such as FP16 and INT8 reduced precision, graph optimization, operation fusion, etc. while offering a fallback to native PyTorch when TensorRT does not support the model subgraphs. Currently, there are two frontend paths existing in the library that help to convert a PyTorch model to tensorRT engine. One path is through Torch Script (TS) and the other", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "is through FX frontend. That being said, the models are traced by either TS or FX into their IR graph and then converted to TensorRT from it.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Learn more with our [tutorial](https://pytorch.org/TensorRT/).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "TorchX\n\nTorchX 0.3 updates include a new list API, experiment tracking, elastic training and improved scheduler support. There\u2019s also a new Multi-Objective NAS [tutorial](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html) using TorchX + Ax.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) List\n\nThe newly added list command and API allows you to list recently launched jobs and their statuses for a given scheduler directly from within TorchX.\n\n- This removes the need for using secondary tools to list the jobs.\n- Full programmatic access to recent jobs for integration with custom tools.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "```Python\n$ torchx list -s kubernetes\nAPP HANDLE APP STATUS\n----------------------------------------------- -----------------\nkubernetes://torchx/default:train-f2nx4459p5crr SUCCEEDED", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Learn more with our [documentation](https://pytorch.org/torchx/main/schedulers.html#torchx.schedulers.Scheduler.list).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) Tracker\n\nTorchX Tracker is a new prototype library that provides a flexible and customizable experiment and artifact tracking interface. This allows you to track inputs and outputs for jobs across multiple steps to make it easier to use TorchX with pipelines and other external systems.\n\n```Python\nfrom torchx import tracker", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "app_run = tracker.app_run_from_env()\napp_run.add_metadata(lr=lr, gamma=gamma) # hyper parameters\napp_run.add_artifact(\"model\", \"storage://path/mnist_cnn.pt\") # logs / checkpoints\napp_run.add_source(parent_run_id, \"model\") # lineage", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Example:\n\n- [https://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker](https://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker)\n- [https://pytorch.org/torchx/main/tracker.html](https://pytorch.org/torchx/main/tracker.html)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) Elastic Training and Autoscaling\n\nElasticity on Ray and Kubernetes \u2013 automatic scale up of distributed training jobs when using a supported scheduler. Learn more with our [documentation](https://pytorch.org/torchx/main/components/distributed.html).\n\n#### (Prototype) Scheduler Improvements: IBM\u00ae Spectrum LSF\n\nAdded prototype support for the IBM Spectrum LSF scheduler.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) AWS Batch Scheduler\n\nThe AWS Batch scheduler integration is now in beta.\n\n- log fetching and listing jobs is now supported.\n- Added configs for job priorities and queue policies\n- Easily access job UI via ui_url\n[https://pytorch.org/torchx/main/schedulers/aws_batch.html](https://pytorch.org/torchx/main/schedulers/aws_batch.html)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "(Prototype) AnyPrecision Optimizer \n\nDrop in replacement for AdamW optimizer that reduces GPU memory, enables two main features:\n\n- Ability to successfully train the entire model pipeline in full BFloat16.\nKahan summation ensures precision. This can improve training throughput, especially on huge models, by reduced memory and increased computation speed.\n- Ability to change the variance state to BFloat16. This can reduce overall memory required for model training with additional speed improvements.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "Find more information [here](https://github.com/pytorch/torchdistx/pull/52).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch 1.11, TorchData, and functorch are now available\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/pytorch-logo.jpg\"\n---\n\nWe are excited to announce the release of PyTorch 1.11 ([release notes](https://github.com/pytorch/pytorch/releases/tag/v1.11.0)). This release is composed of over 3,300 commits since 1.10, made by 434 contributors. Along with 1.11, we are releasing beta versions of TorchData and functorch.\n\nSummary:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "* **TorchData** is a new library for common modular data loading primitives for easily constructing flexible and performant data pipelines. [View it on GitHub](https://github.com/pytorch/data).\n* **functorch**, a library that adds composable function transforms to PyTorch, is now available in beta. [View it on GitHub](https://github.com/pytorch/functorch).\n* Distributed Data Parallel (DDP) static graph optimizations available in stable.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "Introducing TorchData", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "We are delighted to present the Beta release of [TorchData](https://github.com/pytorch/data). This is a library of common modular data loading primitives for easily constructing flexible and performant data pipelines. Based on community feedback, we have found that the existing DataLoader bundled too many features together and can be difficult to extend. Moreover, different use cases often have to rewrite the same data loading utilities over and over again. The goal here is to enable composable data loading", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "through Iterable-style and Map-style building blocks called \u201c[DataPipes](https://github.com/pytorch/data#what-are-datapipes)\u201d that work well out of the box with the [PyTorch\u2019s DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "A `DataPipe` takes in some access function over Python data structures, `__iter__` for `IterDataPipe` and `__getitem__` for `MapDataPipe`, and returns a new access function with a slight transformation applied. You can chain multiple DataPipes together to form a data pipeline that performs all the necessary data transformation.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "We have implemented over 50 DataPipes that provide different core functionalities, such as opening files, parsing texts, transforming samples, caching, shuffling, and batching. For users who are interested in connecting to cloud providers (such as Google Drive or AWS S3), the [fsspec](https://pytorch.org/data/0.3.0/torchdata.datapipes.iter.html#io-datapipes) and iopath DataPipes will allow you to do so. The documentation provides detailed explanations and usage examples of each", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "[IterDataPipe](https://pytorch.org/data/0.3.0/torchdata.datapipes.iter.html) and [MapDataPipe](https://pytorch.org/data/0.3.0/torchdata.datapipes.map.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "In this release, some of the PyTorch domain libraries have migrated their datasets to use DataPipes. In TorchText, the [popular datasets provided by the library](https://github.com/pytorch/text/tree/release/0.12/torchtext/datasets) are implemented using DataPipes and a [section of its SST-2 binary text classification tutorial](https://pytorch.org/text/0.12.0/tutorials/sst2_classification_non_distributed.html#dataset) demonstrates how you can use DataPipes to preprocess data for your model. There also are", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "other prototype implementations of datasets with DataPipes in [TorchVision (available in nightly releases)](https://github.com/pytorch/vision/tree/main/torchvision/prototype/datasets/_builtin) and in [TorchRec](https://pytorch.org/torchrec/torchrec.datasets.html). You can find more [specific examples here](https://pytorch.org/data/0.3.0/examples.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "The [documentation for TorchData](https://pytorch.org/data) is now live. It contains a tutorial that covers [how to use DataPipes](https://pytorch.org/data/0.3.0/tutorial.html#using-datapipes), [use them with DataLoader](https://pytorch.org/data/0.3.0/tutorial.html#working-with-dataloader), and [implement custom ones](https://pytorch.org/data/0.3.0/tutorial.html#implementing-a-custom-datapipe). FAQs and future plans related to DataLoader are described in [our project\u2019s README", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "file](https://github.com/pytorch/data#readme).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "Introducing functorch\n\nWe\u2019re excited to announce the first beta release of [functorch](https://github.com/pytorch/functorch). Heavily inspired by [Google JAX](https://github.com/google/jax), functorch is a library that adds composable function transforms to PyTorch. It aims to provide composable vmap (vectorization) and autodiff transforms that work with PyTorch modules and PyTorch autograd with good eager-mode performance.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "Composable function transforms can help with a number of use cases that are tricky to do in PyTorch today:\n\n* computing per-sample-gradients (or other per-sample quantities)\n* running ensembles of models on a single machine\n* efficiently batching together tasks in the inner-loop of MAML\n* efficiently computing Jacobians and Hessians as well as batched ones", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "Composing vmap (vectorization), vjp (reverse-mode AD), and jvp (forward-mode AD) transforms allows us to effortlessly express the above without designing a separate library for each.\n\nFor more details, please see our [documentation](https://pytorch.org/functorch/), [tutorials](https://pytorch.org/functorch), and [installation instructions](https://pytorch.org/functorch/stable/install.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "Distributed Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) DDP static graph", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "DDP static graph assumes that your model employs the same set of used/unused parameters in every iteration, so that it can deterministically know states like which hooks will fire, how many times the hooks will fire and gradients computation ready order after the first iteration. Static graph caches these states in the first iteration, and thus it could support features that DDP can not support in previous releases, e.g., support multiple activation checkpoints on the same parameters regardless of whether", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "there are unused parameters or not. The static graph feature also applies performance optimizations when there are unused parameters, e.g., it avoids traversing graphs to search unused parameters every iteration, and enables dynamic bucketing order. These optimizations in the DDP static graph brought 10% QPS gain for some recommendation models.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "To enable static graph, just simply set static_graph=True in the DDP API like this:\n\n```\nddp_model = DistributedDataParallel(model, static_graph=True)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "For more details, please see our [documentation](https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html) and [tutorials](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
-{"page_content": "Thanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), and [LinkedIn](https://www.linkedin.com/company/pytorch).\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
+{"page_content": "To circumvent this limitation, TorchVision offered [_custom implementations_](https://github.com/pytorch/vision/blob/main/references/detection/transforms.py) in its reference scripts that show-cased how one could perform augmentations in each task. Though this practice enabled us to train high accuracy [_classification_](https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/), [_object detection & segmentation_](https://pytorch.org/blog/pytorch-1.12-new-library-releases/#beta-object-detection-and-instance-segmentation) models, it was a hacky approach which made those transforms impossible to import from the TorchVision binary.\n\n## The new Transforms API\n\nThe Transforms V2 API supports videos, bounding boxes, and segmentation masks meaning that it offers native support for many Computer Vision tasks. The new solution is a drop-in replacement:\n\n```python\nimport torchvision.transforms.v2 as transforms", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
+{"page_content": "# Exactly the same interface as V1:\ntrans = transforms.Compose([\n transforms.ColorJitter(contrast=0.5),\n transforms.RandomRotation(30),\n transforms.CenterCrop(480),\n])\nimgs, bboxes, labels = trans(imgs, bboxes, labels)\n```\n\nThe new Transform Classes can receive any arbitrary number of inputs without enforcing specific order or structure:\n\n```python\n# Already supported:\ntrans(imgs) # Image Classification\ntrans(videos) # Video Tasks\ntrans(imgs, bboxes, labels) # Object Detection\ntrans(imgs, bboxes, masks, labels) # Instance Segmentation\ntrans(imgs, masks) # Semantic Segmentation\ntrans({\"image\": imgs, \"box\": bboxes, \"tag\": labels}) # Arbitrary Structure\n\n# Future support:\ntrans(imgs, bboxes, labels, keypoints) # Keypoint Detection\ntrans(stereo_images, disparities, masks) # Depth Perception\ntrans(image1, image2, optical_flows, masks) # Optical Flow\ntrans(imgs_or_videos, labels) # MixUp/CutMix-style Transforms\n```", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
+{"page_content": "The Transform Classes make sure that they apply the same random transforms to all the inputs to ensure consistent results.\n\nThe functional API has been updated to support all necessary signal processing kernels (resizing, cropping, affine transforms, padding etc) for all inputs:\n\n```python\nfrom torchvision.transforms.v2 import functional as F\n\n\n# High-level dispatcher, accepts any supported input type, fully BC\nF.resize(inpt, size=[224, 224])\n# Image tensor kernel\nF.resize_image_tensor(img_tensor, size=[224, 224], antialias=True) \n# PIL image kernel\nF.resize_image_pil(img_pil, size=[224, 224], interpolation=BILINEAR)\n# Video kernel\nF.resize_video(video, size=[224, 224], antialias=True) \n# Mask kernel\nF.resize_mask(mask, size=[224, 224])\n# Bounding box kernel\nF.resize_bounding_box(bbox, size=[224, 224], spatial_size=[256, 256])\n```", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
+{"page_content": "Under the hood, the API uses Tensor subclassing to wrap the input, attach useful meta-data and dispatch to the right kernel. For your data to be compatible with these new transforms, you can either use the provided dataset wrapper which should work with most of torchvision built-in datasets, or your can wrap your data manually into Datapoints:\n\n```python\nfrom torchvision.datasets import wrap_dataset_for_transforms_v2\nds = CocoDetection(..., transforms=v2_transforms)\nds = wrap_dataset_for_transforms_v2(ds) # data is now compatible with transforms v2!\n\n# Or wrap your data manually using the lower-level Datapoint classes:\nfrom torchvision import datapoints\n\nimgs = datapoints.Image(images)\nvids = datapoints.Video(videos)\nmasks = datapoints.Mask(target[\"masks\u201c])\nbboxes = datapoints.BoundingBox(target[\"boxes\u201c], format=\u201dXYXY\u201d, spatial_size=imgs.shape)\n```", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
+{"page_content": "In addition to the new API, we now provide importable implementations for several data augmentations that are used in SoTA research such as [_Large Scale Jitter_](https://github.com/pytorch/vision/blob/928b05cad36eadb13e169f03028767c8bcd1f21d/torchvision/transforms/v2/_geometry.py#L1109), [_AutoAugmentation_](https://github.com/pytorch/vision/blob/main/torchvision/transforms/v2/_auto_augment.py) methods and [_several_](https://github.com/pytorch/vision/blob/main/torchvision/transforms/v2/__init__.py) new Geometric, Color and Type Conversion transforms.\n\nThe API continues to support both PIL and Tensor backends for Images, single or batched input and maintains JIT-scriptability on both the functional and class APIs.. The new API has been [_verified_](https://github.com/pytorch/vision/pull/6433#issuecomment-1256741233) to achieve the same accuracy as the previous implementation.\n\n## An end-to-end example", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
+{"page_content": "Here is an example of the new API using the following [_image_](https://user-images.githubusercontent.com/5347466/195350223-8683ef25-1367-4292-9174-c15f85c7358e.jpg). It works both with PIL images and Tensors. For more examples and tutorials, [_take a look at our gallery!_](https://pytorch.org/vision/0.15/auto_examples/index.html)\n\n\n```python\nfrom torchvision import io, utils\nfrom torchvision import datapoints\nfrom torchvision.transforms import v2 as T\nfrom torchvision.transforms.v2 import functional as F", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
+{"page_content": "# Defining and wrapping input to appropriate Tensor Subclasses\npath = \"COCO_val2014_000000418825.jpg\"\nimg = datapoints.Image(io.read_image(path))\n# img = PIL.Image.open(path)\nbboxes = datapoints.BoundingBox(\n [[2, 0, 206, 253], [396, 92, 479, 241], [328, 253, 417, 332],\n [148, 68, 256, 182], [93, 158, 170, 260], [432, 0, 438, 26],\n [422, 0, 480, 25], [419, 39, 424, 52], [448, 37, 456, 62],\n [435, 43, 437, 50], [461, 36, 469, 63], [461, 75, 469, 94],\n [469, 36, 480, 64], [440, 37, 446, 56], [398, 233, 480, 304],\n [452, 39, 463, 63], [424, 38, 429, 50]],\n format=datapoints.BoundingBoxFormat.XYXY,\n spatial_size=F.get_spatial_size(img),\n)\nlabels = [59, 58, 50, 64, 76, 74, 74, 74, 74, 74, 74, 74, 74, 74, 50, 74, 74]\n# Defining and applying Transforms V2\ntrans = T.Compose(\n [\n T.ColorJitter(contrast=0.5),\n T.RandomRotation(30),\n T.CenterCrop(480),\n ]\n)\nimg, bboxes, labels = trans(img, bboxes, labels)\n# Visualizing results\nviz = utils.draw_bounding_boxes(F.to_image_tensor(img), boxes=bboxes)\nF.to_pil_image(viz).show()\n```", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
+{"page_content": "## Development milestones and future work\n\nHere is where we are in development:\n\n- [x] Design API\n- [x] Write Kernels for transforming Videos, Bounding Boxes, Masks and Labels\n- [x] Rewrite all existing Transform Classes (stable + references) on the new API:\n - [x] Image Classification\n - [x] Video Classification\n - [x] Object Detection\n - [x] Instance Segmentation\n - [x] Semantic Segmentation\n- [x] Verify the accuracy of the new API for all supported Tasks and Backends\n- [x] Speed Benchmarks and Performance Optimizations (in progress - planned for Dec)\n- [x] Graduate from Prototype (planned for Q1)\n- [ ] Add support of Depth Perception, Keypoint Detection, Optical Flow and more (future)\n- [ ] Add smooth support for batch-wise transforms like MixUp and CutMix\n\n\nWe would love to get [_feedback_](https://github.com/pytorch/vision/issues/6753) from you to improve its functionality. Please reach out to us if you have any questions or suggestions.", "metadata": {"source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"New Library Updates in PyTorch 1.13\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/new-library-updates-in-pytorch-1.13-2.jpg\"\n---\n\n## Summary\n\nWe are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 1.13 [release](https://github.com/pytorch/pytorch/releases). These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch.\n\nAlong with **1.13**, we are releasing updates to the PyTorch Libraries, please find them below.\n\n### TorchAudio \n\n#### (Beta) Hybrid Demucs Model and Pipeline\n\nHybrid Demucs is a music source separation model that uses both spectrogram and time domain features. It has demonstrated state-of-the-art performance in the Sony\u00ae Music DeMixing Challenge. (citation: [https://arxiv.org/abs/2111.03600](https://arxiv.org/abs/2111.03600))\n\nThe TorchAudio v0.13 release includes the following features", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "- MUSDB_HQ Dataset, which is used in Hybrid Demucs training ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.MUSDB_HQ.html#torchaudio.datasets.MUSDB_HQ))\n- Hybrid Demucs model architecture ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.models.HDemucs.html#torchaudio.models.HDemucs))\n- Three factory functions suitable for different sample rate ranges\n- Pre-trained pipelines ([docs](https://pytorch.org/audio/0.13.0/pipelines.html#id46))\n- SDR Results of pre-trained pipelines on MUSDB_HQ test set\n- Tutorial that steps through music source separation using the pretrained pipeline ([docs](https://pytorch.org/audio/0.13.0/tutorials/hybrid_demucs_tutorial.html))", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "| Pipeline | All | Drums | Bass | Other | Vocals |\n|----------------------------------------|-------|-------|--------|-------|--------|\n| HDEMUCS_HIGH_MUSDB* | 6.42 | 7.76 | 6.51 | 4.47 | 6.93 |\n| HDEMUCS_HIGH_MUSDB_PLUS** | 9.37 | 11.38 | 10.53 | 7.24 | 8.32 |\n\n* Trained on the training data of MUSDB-HQ dataset.
** Trained on both training and test sets of MUSDB-HQ and 150 extra songs from an internal database that were specifically produced for Meta.
\n\n```python\nfrom torchaudio.pipelines import HDEMUCS_HIGH_MUSDB_PLUS\n\nbundle = HDEMUCS_HIGH_MUSDB_PLUS\nmodel = bundle.get_model()\nsources_list = model.sources\n\nmixture, samplerate = torchaudio.load(\"song.wav\")\nsources = model(mixture)\naudios = dict(zip(sources_list, sources)\n```\n\nSpecial thanks to Alexandre Defossez for the guidance.\n\n#### (Beta) Datasets and Metadata Mode for SUPERB Benchmark", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "TorchAudio adds support for various audio-related datasets used in downstream tasks for benchmarking self-supervised learning models. With the addition of several new datasets, there is now support for the downstream tasks in version 1 of the [SUPERB benchmark](https://superbbenchmark.org/), which can be found in the [s3prl repository](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/docs/superb.md).\n\nFor these datasets, we also add metadata support through a `get_metadata` function, enabling faster dataset iteration or preprocessing without the need to load waveforms. The function returns the same features as `__getitem__`, except it returns the relative waveform path rather than the loaded waveform.\n\nDatasets with metadata functionality", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "- LIBRISPEECH ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.LIBRISPEECH.html#torchaudio.datasets.LIBRISPEECH))\n- LibriMix ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.LibriMix.html#torchaudio.datasets.LibriMix))\n- QUESST14 ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.QUESST14.html#torchaudio.datasets.QUESST14))\n- SPEECHCOMMANDS ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.SPEECHCOMMANDS.html#torchaudio.datasets.SPEECHCOMMANDS))\n- (new) FluentSpeechCommands ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.FluentSpeechCommands.html#torchaudio.datasets.FluentSpeechCommands))\n- (new) Snips ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.Snips.html#torchaudio.datasets.Snips))\n- (new) IEMOCAP ([docs](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.IEMOCAP.html#torchaudio.datasets.IEMOCAP))\n- (new) VoxCeleb1 ([Identification](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.VoxCeleb1Identification.html#torchaudio.datasets.VoxCeleb1Identification), [Verification](https://pytorch.org/audio/0.13.0/generated/torchaudio.datasets.VoxCeleb1Verification.html#torchaudio.datasets.VoxCeleb1Verification))", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "#### (Beta) Custom Language Model support in CTC Beam Search Decoding\n\nTorchAudio released a CTC beam search decoder in release 0.12, with KenLM language model support. This release, there is added functionality for creating custom Python language models that are compatible with the decoder, using the `torchaudio.models.decoder.CTCDecoderLM` wrapper.\n\nFor more information on using a custom language model, please refer to the [documentation](https://pytorch.org/audio/0.13.0/generated/torchaudio.models.decoder.CTCDecoder.html#ctcdecoderlm) and [tutorial](https://pytorch.org/audio/0.13.0/tutorials/asr_inference_with_ctc_decoder_tutorial.html#custom-language-model).\n\n#### (Beta) StreamWriter\n\ntorchaudio.io.StreamWriter is a class for encoding media including audio and video. This can handle a wide variety of codecs, chunk-by-chunk encoding and GPU encoding.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "```python\nwriter = StreamWriter(\"example.mp4\")\nwriter.add_audio_stream(\n sample_rate=16_000,\n num_channels=2,\n)\nwriter.add_video_stream(\n frame_rate=30,\n height=96,\n width=128,\n format=\"rgb24\",\n)\nwith writer.open():\n writer.write_audio_chunk(0, audio)\n writer.write_video_chunk(1, video)\n```\n\nFor more information, refer to [the documentation](https://pytorch.org/audio/0.13.0/generated/torchaudio.io.StreamWriter.html) and the following tutorials\n- [StreamWriter Basic Usage](https://pytorch.org/audio/0.13.0/tutorials/streamwriter_basic_tutorial.html)\n- [StreamWriter Advanced Usage](https://pytorch.org/audio/0.13.0/tutorials/streamwriter_advanced.html)\n- [Hardware-Accelerated Video Decoding and Encoding](https://pytorch.org/audio/0.13.0/hw_acceleration_tutorial.html)\n\n### TorchData\n\nFor a complete list of changes and new features, please visit [our repository\u2019s 0.5.0 release note](https://github.com/pytorch/data/releases).\n\n#### (Prototype) DataLoader2", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "`DataLoader2` was introduced in the last release to execute `DataPipe` graph, with support for dynamic sharding for multi-process/distributed data loading, multiple backend ReadingServices, and `DataPipe` graph in-place modification (e.g. shuffle control).\n\nIn this release, we further consolidated the API for `DataLoader2` and a [detailed documentation is now available here](https://pytorch.org/data/0.5/dataloader2.html). We continue to welcome early adopters and feedback, as well as potential contributors. If you are interested in trying it out, we encourage you to install the nightly version of TorchData.\n\n#### (Beta) Data Loading from Cloud Service Providers\n\nWe extended our support to load data from additional cloud storage providers via DataPipes, now covering AWS, Google Cloud Storage, and Azure. A [tutorial is also available](https://pytorch.org/data/0.5/tutorial.html#working-with-cloud-storage-providers). We are open to feedback and feature requests.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "We also performed a simple benchmark, comparing the performance of data loading from AWS S3 and attached volume on an AWS EC2 instance. The results are [visible here](https://github.com/pytorch/data/blob/gh/NivekT/100/head/benchmarks/cloud/aws_s3_results.md).\n\n### torch::deploy (Beta)\n\ntorch::deploy is now in Beta! torch::deploy is a C++ library for Linux based operating systems that allows you to run multiple Python interpreters in a single process. You can run your existing eager PyTorch models without any changes for production inference use cases. Highlights include: \n\n- Existing models work out of the box\u2013no need to modify your python code to support tracing.\n- Full support for your existing Python environment including C extensions.\n- No need to cross process boundaries to load balance in multi-GPU serving environments.\n- Model weight can be shared between multiple Python interpreters.\n- A vastly improved installation and setup process.\n\n```Python\ntorch::deploy::InterpreterManager manager(4);", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "// access one of the 4 interpreters\nauto I = manager.acquireOne();\n\n// run infer from your_model.py\nI.global(\"your_model\", \"infer\")({at::randn({10, 240, 320})});\n```\n\nLearn more [here](https://github.com/pytorch/multipy).\n\n#### (Beta) CUDA/ROCm/CPU Backends\n\ntorch::deploy now links against standard PyTorch Python distributions so all accelerators that PyTorch core supports such as CUDA and AMD/HIP work out of the box.\n\n- Can install any device variant of PyTorch via pip/conda like normal.\n- [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)\n\n#### (Prototype) aarch64/arm64 support\n\ntorch::deploy now has basic support for aarch64 Linux systems.\n\n- We're looking to gather feedback on it and learn more about arm use cases for eager PyTorch models.\n- Learn more / share your use case at [https://github.com/pytorch/multipy/issues/64](https://github.com/pytorch/multipy/issues/64)\n\n### TorchEval\n\n#### (Prototype) Introducing Native Metrics Support for PyTorch", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "TorchEval is a library built for users who want highly performant implementations of common metrics to evaluate machine learning models. It also provides an easy to use interface for building custom metrics with the same toolkit. Building your metrics with TorchEval makes running distributed training loops with [torch.distributed](https://pytorch.org/docs/stable/distributed.html) a breeze.\n\nLearn more with our [docs](https://pytorch.org/torcheval), see our [examples](https://pytorch.org/torcheval/metric_example.html), or check out our [GitHub repo](http://github.com/pytorch/torcheval).\n\n### TorchMultimodal Release (Beta)\n\nPlease watch for upcoming blogs in early November that will introduce TorchMultimodal, a PyTorch domain library for training SoTA multi-task multimodal models at scale, in more details; in the meantime, play around with the library and models through our [tutorial](https://pytorch.org/tutorials/beginner/flava_finetuning_tutorial.html).\n\n### TorchRec", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "### TorchRec\n\n#### (Prototype) Simplified Optimizer Fusion APIs\n\nWe\u2019ve provided a simplified and more intuitive API for setting fused optimizer settings via apply_optimizer_in_backward. This new approach enables the ability to specify optimizer settings on a per-parameter basis and sharded modules will configure [FBGEMM\u2019s TableBatchedEmbedding modules accordingly](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/split_table_batched_embeddings_ops.py#L181). Additionally, this now let's TorchRec\u2019s planner account for optimizer memory usage. This should alleviate reports of sharding jobs OOMing after using Adam using a plan generated from planner.\n\n#### (Prototype) Simplified Sharding APIs", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "We\u2019re introducing the shard API, which now allows you to shard only the embedding modules within a model, and provides an alternative to the current main entry point - DistributedModelParallel. This lets you have a finer grained control over the rest of the model, which can be useful for customized parallelization logic, and inference use cases (which may not require any parallelization on the dense layers). We\u2019re also introducing construct_module_sharding_plan, providing a simpler interface to the TorchRec sharder.\n\n#### (Beta) Quantized Comms", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "Applying [quantization or mixed precision](https://dlp-kdd.github.io/assets/pdf/a11-yang.pdf) to tensors in a collective call during model parallel training greatly improves training efficiency, with little to no effect on model quality. TorchRec now integrates with the [quantized comms library provided by FBGEMM GPU](https://github.com/pytorch/FBGEMM/blob/main/fbgemm_gpu/fbgemm_gpu/quantize_comm.py) and provides an interface to construct encoders and decoders (codecs) that surround the all_to_all, and reduce_scatter collective calls in the output_dist of a sharded module. We also allow you to construct your own codecs to apply to your sharded module. The codces provided by FBGEMM allow FP16, BF16, FP8, and INT8 compressions, and you may use different quantizations for the forward pass and backward pass.\n\n### TorchSnapshot (Beta)", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "Along with PyTorch 1.13, we are releasing the beta version of TorchSnapshot, which is a performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind. Highlights include:\n\n- Performance: TorchSnapshot provides a fast checkpointing implementation employing various optimizations, including zero-copy serialization for most tensor types, overlapped device-to-host copy and storage I/O, parallelized storage I/O\n- Memory Use: TorchSnapshot's memory usage adapts to the host's available resources, greatly reducing the chance of out-of-memory issues when saving and loading checkpoints\n- Usability: Simple APIs that are consistent between distributed and non-distributed workloads\n\nLearn more with our [tutorial](https://pytorch.org/torchsnapshot/main/getting_started.html).\n\n### TorchVision", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "### TorchVision \n\nWe are happy to introduce torchvision v0.14 [(release note)](https://github.com/pytorch/vision/releases). This version introduces a new [model registration API](https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/) to help users retrieving and listing models and weights. It also includes new image and video classification models such as MViT, S3D, Swin Transformer V2, and MaxViT. Last but not least, we also have new primitives and augmentation such as PolynomicalLR scheduler and SimpleCopyPaste.\n\n#### (Beta) Model Registration API", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "Following up on the [multi-weight support API](https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/) that was released on the previous version, we have added a new [model registration API](https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/) to help users retrieve models and weights. There are now 4 new methods under the torchvision.models module: get_model, get_model_weights, get_weight, and list_models. Here are examples of how we can use them:\n\n```Python\nimport torchvision\nfrom torchvision.models import get_model, get_model_weights, list_models\n\n\nmax_params = 5000000\n\ntiny_models = []\nfor model_name in list_models(module=torchvision.models):\n weights_enum = get_model_weights(model_name)\n if len([w for w in weights_enum if w.meta[\"num_params\"] <= max_params]) > 0:\n tiny_models.append(model_name)\n\nprint(tiny_models)\n# ['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "model = get_model(tiny_models[0], weights=\"DEFAULT\")\nprint(sum(x.numel() for x in model.state_dict().values()))\n# 2239188\n```\n\n#### (Beta) New Video Classification Models\n\nWe added two new video classification models, MViT and S3D. MViT is a state of the art video classification transformer model which has 80.757% accuracy on the Kinetics400 dataset, while S3D is a relatively small model with good accuracy for its size. These models can be used as follows:\n\n```Python\nimport torch\nfrom torchvision.models.video import *\n\nvideo = torch.rand(3, 32, 800, 600)\nmodel = mvit_v2_s(weights=\"DEFAULT\")\n# model = s3d(weights=\"DEFAULT\")\nmodel.eval()\nprediction = model(images)\n```\n\nHere is the table showing the accuracy of the new video classification models tested in the Kinetics400 dataset.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "| **Model** | **Acc@1** | **Acc@5** |\n|--------------------------------|-----------|-----------|\n| mvit_v1_b | 81.474 | 95.776 |\n| mvit_v2_s | 83.196 | 96.36 |\n| s3d | 83.582 | 96.64 |\n\nWe would like to thank Haoqi Fan, Yanghao Li, Christoph Feichtenhofer and Wan-Yen Lo for their work on [PyTorchVideo](https://github.com/facebookresearch/pytorchvideo/) and their support during the development of the MViT model. We would like to thank Sophia Zhi for her contribution implementing the S3D model in torchvision.\n\n#### (Stable) New Architecture and Model Variants\n\nFor Classification Models, we\u2019ve added the Swin Transformer V2 architecture along with pre-trained weights for its tiny/small/base variants. In addition, we have added support for the MaxViT transformer. Here is an example on how to use the models:\n\n```Python\nimport torch\nfrom torchvision.models import *", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "image = torch.rand(1, 3, 224, 224)\nmodel = swin_v2_t(weights=\"DEFAULT\").eval()\n# model = maxvit_t(weights=\"DEFAULT\").eval()\nprediction = model(image)\n```\n\nHere is the table showing the accuracy of the models tested on ImageNet1K dataset.\n\n| **Model** | **Acc@1** | **Acc@1 change over V1** | **Acc@5** | **Acc@5 change over V1** |\n|---------------|-----------|--------------------------|-----------|--------------------------|\n| swin_v2_t | 82.072 | + 0.598 | 96.132 | + 0.356 |\n| swin_v2_s | 83.712 | + 0.516 | 96.816 | + 0.456 |\n| swin_v2_b | 84.112 | + 0.530 | 96.864 | + 0.224 |\n| maxvit_t | 83.700 | - | 96.722 | - |\n\nWe would like to thank [Ren Pang](https://github.com/ain-soph) and [Teodor Poncu](https://github.com/TeodorPoncu) for contributing the 2 models to torchvision.\n\n### (Stable) New Primitives & Augmentations", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "In this release we\u2019ve added the [SimpleCopyPaste](https://arxiv.org/abs/2012.07177) augmentation in our reference scripts and we up-streamed the PolynomialLR scheduler to PyTorch Core. We would like to thank [Lezwon Castelino](https://github.com/lezwon) and [Federico Pozzi](https://github.com/federicopozzi33) for their contributions. We are continuing our efforts to modernize TorchVision by adding more SoTA primitives, Augmentations and architectures with the help of our community. If you are interested in contributing, have a look at the following [issue](https://github.com/pytorch/vision/issues/6323).\n\n### Torch-TensorRT\n\n#### (Prototype) TensorRT with FX2TRT frontend\n\nTorch-TensorRT is the PyTorch integration for TensorRT, providing high performance inference on NVIDIA GPUs. Torch-TRT allows for optimizing models directly in PyTorch for deployment providing up to 6x performance improvement.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "Torch-TRT is an AoT compiler which ingests an nn.Module or TorchScript module, optimizes compatible subgraphs in TensorRT & leaves the rest to run in PyTorch. This gives users the performance of TensorRT, but the usability and familiarity of Torch.\n\nTorch-TensorRT is part of the PyTorch ecosystem, and was released as v1.0 in November \u201821. There are currently two distinct front-ends: Torchscript & FX. Each provides the same value proposition and underlying operation with the primary difference being the input & output formats (TS vs FX / Python).\n\nThe Torchscript front-end was included in v1.0 and should be considered stable. The FX front-end is first released in v1.2 and should be considered a Beta.\n\nRelevant Links:", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "Relevant Links:\n\n- [Github](https://github.com/pytorch/TensorRT)\n- [Documentation](https://pytorch.org/TensorRT/)\n- [Generic (TS) getting started guide](https://pytorch.org/TensorRT/getting_started/getting_started_with_python_api.html)\n- [FX getting started guide](https://pytorch.org/TensorRT/tutorials/getting_started_with_fx_path.html)\n\n#### (Stable) Introducing Torch-TensorRT", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "Torch-TensorRT is an integration for PyTorch that leverages inference optimizations of TensorRT on NVIDIA GPUs. It takes advantage of TensorRT optimizations, such as FP16 and INT8 reduced precision, graph optimization, operation fusion, etc. while offering a fallback to native PyTorch when TensorRT does not support the model subgraphs. Currently, there are two frontend paths existing in the library that help to convert a PyTorch model to tensorRT engine. One path is through Torch Script (TS) and the other is through FX frontend. That being said, the models are traced by either TS or FX into their IR graph and then converted to TensorRT from it.\n\nLearn more with our [tutorial](https://pytorch.org/TensorRT/).\n\n### TorchX\n\nTorchX 0.3 updates include a new list API, experiment tracking, elastic training and improved scheduler support. There\u2019s also a new Multi-Objective NAS [tutorial](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html) using TorchX + Ax.\n\n#### (Prototype) List", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "The newly added list command and API allows you to list recently launched jobs and their statuses for a given scheduler directly from within TorchX.\n\n- This removes the need for using secondary tools to list the jobs.\n- Full programmatic access to recent jobs for integration with custom tools.\n\n```Python\n$ torchx list -s kubernetes\nAPP HANDLE APP STATUS\n----------------------------------------------- -----------------\nkubernetes://torchx/default:train-f2nx4459p5crr SUCCEEDED\n```\n\nLearn more with our [documentation](https://pytorch.org/torchx/main/schedulers.html#torchx.schedulers.Scheduler.list).\n\n#### (Prototype) Tracker\n\nTorchX Tracker is a new prototype library that provides a flexible and customizable experiment and artifact tracking interface. This allows you to track inputs and outputs for jobs across multiple steps to make it easier to use TorchX with pipelines and other external systems.\n\n```Python\nfrom torchx import tracker", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "app_run = tracker.app_run_from_env()\napp_run.add_metadata(lr=lr, gamma=gamma) # hyper parameters\napp_run.add_artifact(\"model\", \"storage://path/mnist_cnn.pt\") # logs / checkpoints\napp_run.add_source(parent_run_id, \"model\") # lineage\n```\n\nExample:\n\n- [https://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker](https://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker)\n- [https://pytorch.org/torchx/main/tracker.html](https://pytorch.org/torchx/main/tracker.html)\n\n#### (Prototype) Elastic Training and Autoscaling\n\nElasticity on Ray and Kubernetes \u2013 automatic scale up of distributed training jobs when using a supported scheduler. Learn more with our [documentation](https://pytorch.org/torchx/main/components/distributed.html).\n\n#### (Prototype) Scheduler Improvements: IBM\u00ae Spectrum LSF\n\nAdded prototype support for the IBM Spectrum LSF scheduler.\n\n#### (Beta) AWS Batch Scheduler\n\nThe AWS Batch scheduler integration is now in beta.", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "- log fetching and listing jobs is now supported.\n- Added configs for job priorities and queue policies\n- Easily access job UI via ui_url\n[https://pytorch.org/torchx/main/schedulers/aws_batch.html](https://pytorch.org/torchx/main/schedulers/aws_batch.html)\n\n#### (Prototype) AnyPrecision Optimizer \n\nDrop in replacement for AdamW optimizer that reduces GPU memory, enables two main features:\n\n- Ability to successfully train the entire model pipeline in full BFloat16.\nKahan summation ensures precision. This can improve training throughput, especially on huge models, by reduced memory and increased computation speed.\n- Ability to change the variance state to BFloat16. This can reduce overall memory required for model training with additional speed improvements.\n\nFind more information [here](https://github.com/pytorch/torchdistx/pull/52).", "metadata": {"source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"PyTorch 1.11, TorchData, and functorch are now available\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/pytorch-logo.jpg\"\n---\n\nWe are excited to announce the release of PyTorch 1.11 ([release notes](https://github.com/pytorch/pytorch/releases/tag/v1.11.0)). This release is composed of over 3,300 commits since 1.10, made by 434 contributors. Along with 1.11, we are releasing beta versions of TorchData and functorch.\n\nSummary:\n\n* **TorchData** is a new library for common modular data loading primitives for easily constructing flexible and performant data pipelines. [View it on GitHub](https://github.com/pytorch/data).\n* **functorch**, a library that adds composable function transforms to PyTorch, is now available in beta. [View it on GitHub](https://github.com/pytorch/functorch).\n* Distributed Data Parallel (DDP) static graph optimizations available in stable.\n\n## Introducing TorchData", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
+{"page_content": "We are delighted to present the Beta release of [TorchData](https://github.com/pytorch/data). This is a library of common modular data loading primitives for easily constructing flexible and performant data pipelines. Based on community feedback, we have found that the existing DataLoader bundled too many features together and can be difficult to extend. Moreover, different use cases often have to rewrite the same data loading utilities over and over again. The goal here is to enable composable data loading through Iterable-style and Map-style building blocks called \u201c[DataPipes](https://github.com/pytorch/data#what-are-datapipes)\u201d that work well out of the box with the [PyTorch\u2019s DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
+{"page_content": "A `DataPipe` takes in some access function over Python data structures, `__iter__` for `IterDataPipe` and `__getitem__` for `MapDataPipe`, and returns a new access function with a slight transformation applied. You can chain multiple DataPipes together to form a data pipeline that performs all the necessary data transformation.\n\nWe have implemented over 50 DataPipes that provide different core functionalities, such as opening files, parsing texts, transforming samples, caching, shuffling, and batching. For users who are interested in connecting to cloud providers (such as Google Drive or AWS S3), the [fsspec](https://pytorch.org/data/0.3.0/torchdata.datapipes.iter.html#io-datapipes) and iopath DataPipes will allow you to do so. The documentation provides detailed explanations and usage examples of each [IterDataPipe](https://pytorch.org/data/0.3.0/torchdata.datapipes.iter.html) and [MapDataPipe](https://pytorch.org/data/0.3.0/torchdata.datapipes.map.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
+{"page_content": "In this release, some of the PyTorch domain libraries have migrated their datasets to use DataPipes. In TorchText, the [popular datasets provided by the library](https://github.com/pytorch/text/tree/release/0.12/torchtext/datasets) are implemented using DataPipes and a [section of its SST-2 binary text classification tutorial](https://pytorch.org/text/0.12.0/tutorials/sst2_classification_non_distributed.html#dataset) demonstrates how you can use DataPipes to preprocess data for your model. There also are other prototype implementations of datasets with DataPipes in [TorchVision (available in nightly releases)](https://github.com/pytorch/vision/tree/main/torchvision/prototype/datasets/_builtin) and in [TorchRec](https://pytorch.org/torchrec/torchrec.datasets.html). You can find more [specific examples here](https://pytorch.org/data/0.3.0/examples.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
+{"page_content": "The [documentation for TorchData](https://pytorch.org/data) is now live. It contains a tutorial that covers [how to use DataPipes](https://pytorch.org/data/0.3.0/tutorial.html#using-datapipes), [use them with DataLoader](https://pytorch.org/data/0.3.0/tutorial.html#working-with-dataloader), and [implement custom ones](https://pytorch.org/data/0.3.0/tutorial.html#implementing-a-custom-datapipe). FAQs and future plans related to DataLoader are described in [our project\u2019s README file](https://github.com/pytorch/data#readme).\n\n## Introducing functorch\n\nWe\u2019re excited to announce the first beta release of [functorch](https://github.com/pytorch/functorch). Heavily inspired by [Google JAX](https://github.com/google/jax), functorch is a library that adds composable function transforms to PyTorch. It aims to provide composable vmap (vectorization) and autodiff transforms that work with PyTorch modules and PyTorch autograd with good eager-mode performance.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
+{"page_content": "Composable function transforms can help with a number of use cases that are tricky to do in PyTorch today:\n\n* computing per-sample-gradients (or other per-sample quantities)\n* running ensembles of models on a single machine\n* efficiently batching together tasks in the inner-loop of MAML\n* efficiently computing Jacobians and Hessians as well as batched ones\n\nComposing vmap (vectorization), vjp (reverse-mode AD), and jvp (forward-mode AD) transforms allows us to effortlessly express the above without designing a separate library for each.\n\nFor more details, please see our [documentation](https://pytorch.org/functorch/), [tutorials](https://pytorch.org/functorch), and [installation instructions](https://pytorch.org/functorch/stable/install.html).\n\n## Distributed Training\n\n### (Stable) DDP static graph", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
+{"page_content": "DDP static graph assumes that your model employs the same set of used/unused parameters in every iteration, so that it can deterministically know states like which hooks will fire, how many times the hooks will fire and gradients computation ready order after the first iteration. Static graph caches these states in the first iteration, and thus it could support features that DDP can not support in previous releases, e.g., support multiple activation checkpoints on the same parameters regardless of whether there are unused parameters or not. The static graph feature also applies performance optimizations when there are unused parameters, e.g., it avoids traversing graphs to search unused parameters every iteration, and enables dynamic bucketing order. These optimizations in the DDP static graph brought 10% QPS gain for some recommendation models.\n\nTo enable static graph, just simply set static_graph=True in the DDP API like this:\n\n```\nddp_model = DistributedDataParallel(model, static_graph=True)\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
+{"page_content": "For more details, please see our [documentation](https://pytorch.org/docs/master/generated/torch.nn.parallel.DistributedDataParallel.html) and [tutorials](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).\n\nThanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), and [LinkedIn](https://www.linkedin.com/company/pytorch).\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"}}
{"page_content": "---\nlayout: blog_detail\ntitle: \"Torchserve Performance Tuning, Animated Drawings Case-Study\"\nauthor: Hamid Shojanazeri, Geeta Chauhan, Mark Saroufim, Jesse Smith\nfeatured-img: \"assets/images/sketch_animator.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "In this post we discuss performance tuning of Torchserve for serving your models in production. One of the biggest challenges in the life cycle of a ML project is deploying models in production. This requires a reliable serving solution along with solutions that address the MLOps needs. A robust serving solution needs to provide support for multi model serving, model versioning, metric logging, monitoring and scaling to serve the peak traffic. In this post, we will have an overview of Torchserve and how to", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "tune its performance for production use-cases. We discuss the [Animated Drawings app](https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/) from Meta that can turn your human figure sketches to animations and how it could serve the peak traffic with Torchserve. The Animated Drawing\u2019s workflow is below.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n[https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/](https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/)", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Many AI systems and tools are designed to handle realistic images of humans, children's drawings add a level of complexity and unpredictability as they are often constructed in abstract, fanciful ways. These types of morphological and stylistic variations can confuse even state-of-the-art AI systems that excel at spotting objects in photorealistic images and drawings.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Meta AI researchers are working to overcome this challenge so that AI systems will be better able to recognize drawings of human figures in the wildly varied ways that children create them. This great blog post provides more details about the Animated Drawings and the approach taken.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Torchserve\n\n\n
\n
Fig1. Overall flow of Torchserve performance tuning \n\n\nOnce you have trained your model, it needs to be integrated into a larger system to have a full-fledged application, we use the term \u201cmodel serving\u201d to refer to this integration. Basically model serving is making your trained model available to run inferences and subsequent use of the model.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Torchserve is the Pytorch preferred solution for serving models in production. It is a performant and scalable tool that wraps your model in a HTTP or HTTPS API. It has a frontend implemented in Java that handles multiple tasks from assigning workers for serving models to handling the connection between client and server. Torchserve has a Python backend that is responsible for handling the inference service.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Torchserve supports multi model serving and versioning for AB test, dynamic batching, logging and metrics. It exposes four APIs for [inference](https://github.com/pytorch/serve/blob/master/docs/inference_api.md), [explanations](https://github.com/pytorch/serve/blob/master/docs/inference_api.md#explanations-api), [management](https://github.com/pytorch/serve/blob/master/docs/management_api.md) and [metrics](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md).", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "[Inference](https://github.com/pytorch/serve/blob/master/docs/inference_api.md) API is listening on port 8080 and accessible through localhost by default, this can be configured in [Torchserve configuration](https://github.com/pytorch/serve/blob/master/docs/configuration.md) and enable getting predictions from the model.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "[Explanation](https://github.com/pytorch/serve/blob/master/docs/inference_api.md#explanations-api) API uses Captum under the hood to provide explanations of the model that is being served and listens to the port 8080 as well.\n\n[Management](https://github.com/pytorch/serve/blob/master/docs/management_api.md#management-api) API allows to register or unregister and describe a model. It also enables users to scale up or down the number of workers that serve the model.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "[Metric](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md) API by default listens to port 8082 and enables us to monitor the model that is being served.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Torchserve let you scale your model serving and handle the peak traffic by supporting [batch inference](https://github.com/pytorch/serve/blob/master/docs/batch_inference_with_ts.md) and multiple workers that serve your model. Scaling can be done through [management](https://github.com/pytorch/serve/blob/master/docs/management_api.md) API and settings through a [configuration](https://github.com/pytorch/serve/blob/master/docs/configuration.md) file. Also, metric API helps you to monitor your model serving", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "through default and customizable metrics.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Other advanced settings such as the length of the queue for the received requests, maximum wait time for a batch of inputs and many other properties are configurable through a[ config file](https://github.com/pytorch/serve/blob/master/docs/configuration.md) that can be passed to Torchserve when it is started.\n\n\n**Steps to serve your model with Torchserve**", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "1. [Install Torchserve, model archiver](https://github.com/pytorch/serve/blob/master/docs/getting_started.md#install-torchserve-and-torch-model-archiver) and its requirements.\n2. Choose a default handler that fits your task (e.g image classification, etc) or author a [custom handler](https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handlers).", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "3. [Package your model](https://github.com/pytorch/serve/tree/master/examples/Huggingface_Transformers#create-model-archive-eager-mode) artifacts (trained model checkpoint and all other necessary files for loading and running your model) and the handler into a \u201c.mar\u201d file using [Torcharchive](https://github.com/pytorch/serve/blob/master/model-archiver/README.md) and place it in the model store.\n4. [Start serving your model](https://github.com/pytorch/serve/blob/master/docs/getting_started.md).", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "5. [Run inference](https://github.com/pytorch/serve/blob/master/docs/getting_started.md#get-predictions-from-a-model).\nWe will discuss model handlers and metrics in more detail here.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Model handlers\n\nTorchserve uses a handler in the backend to load the models, preprocess the received data, run inference and post-process the response. Handler in torchserve is a **python script** that all the model initialization, preprocessing, inference and post processing logic goes into.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Torchserve provides an out of the box handler for a number of applications like image classification, segmentation, object detection and text classification. It also supports custom handlers, in case your use case is not supported in default handlers.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "It provides a great flexibility in custom handlers, this potentially make Torchserve as **multi-framework** serving tool. Custom handlers let you define your custom logic to initialize a model that can be used also to load models from other frameworks such as ONNX.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Torchserve **handler** is made of four main **functions**, **initialize**, **preprocess**, **inference** and **postprocess** that each return a list. The code snippet below shows an example of a custom handler.**Custom handlers inherit** from **BaseHandler** in Torchserve and can **overwrite** any of the **main** **functions**. Here is an example of the handler used for loading the [Detectron2](https://github.com/facebookresearch/detectron2) model for figure detection, this model has been exported to", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Torchscript and uses model.half() to run the inference with FP16, details are explained in another [section]() in this post.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "```python\n\nclass MyModelHandler(BaseHandler):\n def initialize(self, context):\n self.manifest = ctx.manifest\n properties = ctx.system_properties\n model_dir = properties.get(\"model_dir\")\n serialized_file = self.manifest[\"model\"][\"serializedFile\"]\n model_pt_path = os.path.join(model_dir, serialized_file)", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "self.device = torch.device(\n \"cuda:\" + str(properties.get(\"gpu_id\"))\n if torch.cuda.is_available() and properties.get(\"gpu_id\") is not None\n else \"cpu\"\n )\n self.model = torch.jit.load(model_pt_path, map_location=self.device)\n\n self.model = self.model.half()\n\n def preprocess(self, data):\n\n inputs = []\n for request in batch:\n\n request_body = request.get(\"body\")", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "input_ = io.BytesIO(request_body)\n image = cv2.imdecode(np.fromstring(input_.read(), np.uint8), 1)\n input = torch.Tensor(image).permute(2, 0, 1)\n input = input.to(self.device)\n input = input.half()\n inputs.append({\"image\": input})\n\n return inputs\n\n def inference(self,inputs):\n predictions = self.model(**inputs)\n return predictions", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "def postprocess(self, output):\n responses = []\n for inference_output in inference_outputs:\n responses_json = {\n 'classes': inference_output['pred_classes'].tolist(),\n 'scores': inference_output['scores'].tolist(),\n \"boxes\": inference_output['pred_boxes'].tolist()\n }\n responses.append(json.dumps(responses_json))\n\n return responses\n```", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Metrics\n\nAn essential component in serving models in production is the ability to monitor them. **Torchserve** **collects** **system level** [metrics](https://github.com/pytorch/serve/blob/master/docs/metrics.md) regularly and **allows** adding **custom metrics** as well.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "**[System level metrics](https://github.com/pytorch/serve/blob/master/docs/metrics.md#system-metrics)** consist of CPU utilization, available and used disk space and memory on the host machine along with number of requests with different response codes (e.g 200-300, 400-500 and above 500). **Custom metrics** can be **added** to the metrics as explained [here](https://github.com/pytorch/serve/blob/master/docs/metrics.md#custom-metrics-api). TorchServe logs these two sets of metrics to different log files.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Metrics are collected by default at:", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "* System metrics - log_directory/ts_metrics.log\n* Custom metrics - log directory/model_metrics.log", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "As mentioned before, Torchserve also exposes [metric API](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md), that by default listens to port 8082 and enables users to query and monitor the collected metrics. The default metrics endpoint returns Prometheus formatted metrics. You can query metrics using curl requests or point a [Prometheus Server](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md#prometheus-server) to the endpoint and use", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "[Grafana](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md#grafana) for dashboards.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "While serving a model you can query metrics using curl request as follows:\n\n```\ncurl http://127.0.0.1:8082/metrics", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "In case you are looking into exporting the logged metrics, please refer to this [example](https://github.com/google/mtail) that uses mtail to export metrics to Prometheus. Tracking these metrics in a dashboard allows you to monitor performance regressions that may have been sporadic or hard to spot during an offline benchmark run.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "What to consider for tuning performance of a model in production\n\nThe workflow suggested in Fig 1, is the general idea on how to approach model deployment in production with Torchserve.\n\nIn many cases serving models in production is **optimized** **based** on **throughput** or **latency** service level agreement (**SLA)s**. Usually **real-time** **applications** are more concerned about **latency** whereas **off-line applications** may care more about higher **throughput**.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "In this post we discuss performance tuning of Torchserve for serving your models in production. One of the biggest challenges in the life cycle of a ML project is deploying models in production. This requires a reliable serving solution along with solutions that address the MLOps needs. A robust serving solution needs to provide support for multi model serving, model versioning, metric logging, monitoring and scaling to serve the peak traffic. In this post, we will have an overview of Torchserve and how to tune its performance for production use-cases. We discuss the [Animated Drawings app](https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/) from Meta that can turn your human figure sketches to animations and how it could serve the peak traffic with Torchserve. The Animated Drawing\u2019s workflow is below.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "[https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/](https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/)\n\nMany AI systems and tools are designed to handle realistic images of humans, children's drawings add a level of complexity and unpredictability as they are often constructed in abstract, fanciful ways. These types of morphological and stylistic variations can confuse even state-of-the-art AI systems that excel at spotting objects in photorealistic images and drawings.\nMeta AI researchers are working to overcome this challenge so that AI systems will be better able to recognize drawings of human figures in the wildly varied ways that children create them. This great blog post provides more details about the Animated Drawings and the approach taken.\n\n## Torchserve\n\n\n
\n
Fig1. Overall flow of Torchserve performance tuning \n", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "Once you have trained your model, it needs to be integrated into a larger system to have a full-fledged application, we use the term \u201cmodel serving\u201d to refer to this integration. Basically model serving is making your trained model available to run inferences and subsequent use of the model. \n\nTorchserve is the Pytorch preferred solution for serving models in production. It is a performant and scalable tool that wraps your model in a HTTP or HTTPS API. It has a frontend implemented in Java that handles multiple tasks from assigning workers for serving models to handling the connection between client and server. Torchserve has a Python backend that is responsible for handling the inference service.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "Torchserve supports multi model serving and versioning for AB test, dynamic batching, logging and metrics. It exposes four APIs for [inference](https://github.com/pytorch/serve/blob/master/docs/inference_api.md), [explanations](https://github.com/pytorch/serve/blob/master/docs/inference_api.md#explanations-api), [management](https://github.com/pytorch/serve/blob/master/docs/management_api.md) and [metrics](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md). \n\n[Inference](https://github.com/pytorch/serve/blob/master/docs/inference_api.md) API is listening on port 8080 and accessible through localhost by default, this can be configured in [Torchserve configuration](https://github.com/pytorch/serve/blob/master/docs/configuration.md) and enable getting predictions from the model.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "[Explanation](https://github.com/pytorch/serve/blob/master/docs/inference_api.md#explanations-api) API uses Captum under the hood to provide explanations of the model that is being served and listens to the port 8080 as well.\n\n[Management](https://github.com/pytorch/serve/blob/master/docs/management_api.md#management-api) API allows to register or unregister and describe a model. It also enables users to scale up or down the number of workers that serve the model. \n\n[Metric](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md) API by default listens to port 8082 and enables us to monitor the model that is being served.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "Torchserve let you scale your model serving and handle the peak traffic by supporting [batch inference](https://github.com/pytorch/serve/blob/master/docs/batch_inference_with_ts.md) and multiple workers that serve your model. Scaling can be done through [management](https://github.com/pytorch/serve/blob/master/docs/management_api.md) API and settings through a [configuration](https://github.com/pytorch/serve/blob/master/docs/configuration.md) file. Also, metric API helps you to monitor your model serving through default and customizable metrics.\n\nOther advanced settings such as the length of the queue for the received requests, maximum wait time for a batch of inputs and many other properties are configurable through a[ config file](https://github.com/pytorch/serve/blob/master/docs/configuration.md) that can be passed to Torchserve when it is started.\n\n\n**Steps to serve your model with Torchserve**", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "1. [Install Torchserve, model archiver](https://github.com/pytorch/serve/blob/master/docs/getting_started.md#install-torchserve-and-torch-model-archiver) and its requirements.\n2. Choose a default handler that fits your task (e.g image classification, etc) or author a [custom handler](https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handlers).\n3. [Package your model](https://github.com/pytorch/serve/tree/master/examples/Huggingface_Transformers#create-model-archive-eager-mode) artifacts (trained model checkpoint and all other necessary files for loading and running your model) and the handler into a \u201c.mar\u201d file using [Torcharchive](https://github.com/pytorch/serve/blob/master/model-archiver/README.md) and place it in the model store.\n4. [Start serving your model](https://github.com/pytorch/serve/blob/master/docs/getting_started.md).\n5. [Run inference](https://github.com/pytorch/serve/blob/master/docs/getting_started.md#get-predictions-from-a-model).\nWe will discuss model handlers and metrics in more detail here.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "## Model handlers\n\nTorchserve uses a handler in the backend to load the models, preprocess the received data, run inference and post-process the response. Handler in torchserve is a **python script** that all the model initialization, preprocessing, inference and post processing logic goes into.\n\nTorchserve provides an out of the box handler for a number of applications like image classification, segmentation, object detection and text classification. It also supports custom handlers, in case your use case is not supported in default handlers. \n\nIt provides a great flexibility in custom handlers, this potentially make Torchserve as **multi-framework** serving tool. Custom handlers let you define your custom logic to initialize a model that can be used also to load models from other frameworks such as ONNX.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "Torchserve **handler** is made of four main **functions**, **initialize**, **preprocess**, **inference** and **postprocess** that each return a list. The code snippet below shows an example of a custom handler.**Custom handlers inherit** from **BaseHandler** in Torchserve and can **overwrite** any of the **main** **functions**. Here is an example of the handler used for loading the [Detectron2](https://github.com/facebookresearch/detectron2) model for figure detection, this model has been exported to Torchscript and uses model.half() to run the inference with FP16, details are explained in another [section]() in this post.\n\n```python\n\nclass MyModelHandler(BaseHandler):\n def initialize(self, context):\n self.manifest = ctx.manifest\n properties = ctx.system_properties\n model_dir = properties.get(\"model_dir\")\n serialized_file = self.manifest[\"model\"][\"serializedFile\"]\n model_pt_path = os.path.join(model_dir, serialized_file)", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "self.device = torch.device(\n \"cuda:\" + str(properties.get(\"gpu_id\"))\n if torch.cuda.is_available() and properties.get(\"gpu_id\") is not None\n else \"cpu\"\n )\n self.model = torch.jit.load(model_pt_path, map_location=self.device)\n\n self.model = self.model.half()\n\n def preprocess(self, data):\n\n inputs = []\n for request in batch:\n\n request_body = request.get(\"body\")\n\n input_ = io.BytesIO(request_body)\n image = cv2.imdecode(np.fromstring(input_.read(), np.uint8), 1)\n input = torch.Tensor(image).permute(2, 0, 1)\n input = input.to(self.device)\n input = input.half()\n inputs.append({\"image\": input})\n\n return inputs\n\n def inference(self,inputs):\n predictions = self.model(**inputs)\n return predictions", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "def postprocess(self, output):\n responses = []\n for inference_output in inference_outputs:\n responses_json = {\n 'classes': inference_output['pred_classes'].tolist(),\n 'scores': inference_output['scores'].tolist(),\n \"boxes\": inference_output['pred_boxes'].tolist()\n }\n responses.append(json.dumps(responses_json))\n\n return responses\n```\n\n## Metrics\n\nAn essential component in serving models in production is the ability to monitor them. **Torchserve** **collects** **system level** [metrics](https://github.com/pytorch/serve/blob/master/docs/metrics.md) regularly and **allows** adding **custom metrics** as well.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "**[System level metrics](https://github.com/pytorch/serve/blob/master/docs/metrics.md#system-metrics)** consist of CPU utilization, available and used disk space and memory on the host machine along with number of requests with different response codes (e.g 200-300, 400-500 and above 500). **Custom metrics** can be **added** to the metrics as explained [here](https://github.com/pytorch/serve/blob/master/docs/metrics.md#custom-metrics-api). TorchServe logs these two sets of metrics to different log files. Metrics are collected by default at:\n\n* System metrics - log_directory/ts_metrics.log\n* Custom metrics - log directory/model_metrics.log", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "As mentioned before, Torchserve also exposes [metric API](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md), that by default listens to port 8082 and enables users to query and monitor the collected metrics. The default metrics endpoint returns Prometheus formatted metrics. You can query metrics using curl requests or point a [Prometheus Server](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md#prometheus-server) to the endpoint and use [Grafana](https://github.com/pytorch/serve/blob/master/docs/metrics_api.md#grafana) for dashboards. \n\nWhile serving a model you can query metrics using curl request as follows:\n\n```\ncurl http://127.0.0.1:8082/metrics\n```", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "In case you are looking into exporting the logged metrics, please refer to this [example](https://github.com/google/mtail) that uses mtail to export metrics to Prometheus. Tracking these metrics in a dashboard allows you to monitor performance regressions that may have been sporadic or hard to spot during an offline benchmark run.\n\n## What to consider for tuning performance of a model in production\n\nThe workflow suggested in Fig 1, is the general idea on how to approach model deployment in production with Torchserve.\n\nIn many cases serving models in production is **optimized** **based** on **throughput** or **latency** service level agreement (**SLA)s**. Usually **real-time** **applications** are more concerned about **latency** whereas **off-line applications** may care more about higher **throughput**.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
{"page_content": "There are a number of main factors contributing to the performance of a serving model in production. In particular, we are focusing on serving Pytorch models with Torchserve here, however most of these factors generalize to all models from other frameworks as well.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "* **Model optimizations**: this is a pre-step for deploying models into production. This is a very broad discussion that we will get into in a series of future blogs. This includes techniques like quantization, pruning to decrease the size of the model, using Intermediate representations (IR graphs) such as Torchscript in Pytorch, fusing kernels and many others. Currently [torchprep](https://github.com/msaroufim/torchprep) provides many of these techniques as a CLI tool.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "* **Batch inference:** it refers to feeding multiple inputs into a model, while it is essential during training, it can be very helpful to manage the cost at inference time as well. Hardware accelerators are optimized for parallelism and batching helps to saturate the compute capacity and often leads to higher throughput. The main difference in inference is you can\u2019t wait too long to get a batch filled from clients, something we call dynamic batching", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "* **Number of Workers :** Torchserve uses workers to serve models. Torchserve workers are Python processes that hold a copy of the model weights for running inference. Too few workers means you\u2019re not benefitting from enough parallelism but too many can cause worker contention and degrade end to end performance.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "- **Hardware :** choosing the appropriate hardware based on the model, application and latency, throughput budget. This could be one of the **supported** hardwares in Torchserve, **CPU, GPU, AWS Inferentia**. Some hardware configurations are intended for best in class performance and others are better suited for cost effective inference. From our experiments we\u2019ve found that GPUs shine best at larger batch sizes whereas the right CPUs and AWS Inferentia can be far more cost effective for lower batch sizes", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "and low latency.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Best Practices for Performance tuning on Torchserve\n\nTo get the best performance out of your model while serving it with Torchserve, we are sharing some of the best practices here. Torchserve provides a [benchmark](https://github.com/pytorch/serve/tree/c87bfec8916d340de5de5810b14a016049b0e395/benchmarks#benchmarking-with-apache-bench) suite that provides helpful insight to make informed decisions on different choices as detailed below.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "* **Optimize your model** as the first step, Pytorch model optimization [tutorials](https://pytorch.org/tutorials/). **Model optimization** choices are also closely **tied** to the **hardware** of choice. We will discuss it in more detail in another blog post.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "* **Deciding** the **hardware** for model deployment can be closely related to the latency and throughput budget and cost per inference. Depending on the size of model and application it can vary, for some models like computer vision models it has been historically not affordable to run in production on CPU. However, by having optimizations such [IPEX](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/examples/intel_extension_for_pytorch/README.md) as recently added to", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Torchserve this has been much more affordable and cost beneficial and you can learn more in this investigative [case study](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex.html)", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "* **Workers** in Torchserve are Python processes that provide parallelism, setting the number of workers should be done carefully. By default Torchserve launch number of workers equal to VCPUs or available GPUs on the host, this can add a considerable amount of time to the Torchserve start.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Torchserve exposes a [config property](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/configuration.md#config-model) to set the number of workers. To provide an **efficient parallelism** through **multiple workers** and avoiding them to compete over resources, as a baseline we **recommend** following setting on CPU and GPU:", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "**CPU** : In the handler, `torch.set_num_threads(1) `then set the number of workers to `num physical cores / 2. `But the the best threading configurations can be achieved by leveraging the Intel CPU launcher script.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "* **Model optimizations**: this is a pre-step for deploying models into production. This is a very broad discussion that we will get into in a series of future blogs. This includes techniques like quantization, pruning to decrease the size of the model, using Intermediate representations (IR graphs) such as Torchscript in Pytorch, fusing kernels and many others. Currently [torchprep](https://github.com/msaroufim/torchprep) provides many of these techniques as a CLI tool. \n* **Batch inference:** it refers to feeding multiple inputs into a model, while it is essential during training, it can be very helpful to manage the cost at inference time as well. Hardware accelerators are optimized for parallelism and batching helps to saturate the compute capacity and often leads to higher throughput. The main difference in inference is you can\u2019t wait too long to get a batch filled from clients, something we call dynamic batching\n* **Number of Workers :** Torchserve uses workers to serve models. Torchserve workers are Python processes that hold a copy of the model weights for running inference. Too few workers means you\u2019re not benefitting from enough parallelism but too many can cause worker contention and degrade end to end performance.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "- **Hardware :** choosing the appropriate hardware based on the model, application and latency, throughput budget. This could be one of the **supported** hardwares in Torchserve, **CPU, GPU, AWS Inferentia**. Some hardware configurations are intended for best in class performance and others are better suited for cost effective inference. From our experiments we\u2019ve found that GPUs shine best at larger batch sizes whereas the right CPUs and AWS Inferentia can be far more cost effective for lower batch sizes and low latency.\n\n## Best Practices for Performance tuning on Torchserve\n\nTo get the best performance out of your model while serving it with Torchserve, we are sharing some of the best practices here. Torchserve provides a [benchmark](https://github.com/pytorch/serve/tree/c87bfec8916d340de5de5810b14a016049b0e395/benchmarks#benchmarking-with-apache-bench) suite that provides helpful insight to make informed decisions on different choices as detailed below.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "* **Optimize your model** as the first step, Pytorch model optimization [tutorials](https://pytorch.org/tutorials/). **Model optimization** choices are also closely **tied** to the **hardware** of choice. We will discuss it in more detail in another blog post.\n* **Deciding** the **hardware** for model deployment can be closely related to the latency and throughput budget and cost per inference. Depending on the size of model and application it can vary, for some models like computer vision models it has been historically not affordable to run in production on CPU. However, by having optimizations such [IPEX](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/examples/intel_extension_for_pytorch/README.md) as recently added to Torchserve this has been much more affordable and cost beneficial and you can learn more in this investigative [case study](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex.html) \n* **Workers** in Torchserve are Python processes that provide parallelism, setting the number of workers should be done carefully. By default Torchserve launch number of workers equal to VCPUs or available GPUs on the host, this can add a considerable amount of time to the Torchserve start.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "Torchserve exposes a [config property](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/configuration.md#config-model) to set the number of workers. To provide an **efficient parallelism** through **multiple workers** and avoiding them to compete over resources, as a baseline we **recommend** following setting on CPU and GPU:\n\n\n **CPU** : In the handler, `torch.set_num_threads(1) `then set the number of workers to `num physical cores / 2. `But the the best threading configurations can be achieved by leveraging the Intel CPU launcher script.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
{"page_content": "**GPU**: number of available GPUs can be set through[ number_gpus](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/configuration.md#limit-gpu-usage) in config.properties. Torchserve uses round robin to assign workers to GPUs. We recommend setting the number of workers as follows. `Number of worker = (Number of available GPUs) / (Number of Unique Models). `Note that GPUs that are pre-Ampere do not provide any resource isolation with Multi Instance GPUs.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "* **Batch size** can directly affect the latency and the throughput. To better utilize the compute resources batch size needs to be increased. However, there is a tradeoff between latency and throughput. **Larger batch sizes** can **increase** the **throughput but results in a higher latency** as well. Batch size can be set in Torchserve in two ways, either through[ model config](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/configuration.md#config-model) in", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "config.properties or while registering the model using [Management API](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/management_api.md#scale-workers).", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "In the next section, we are going to use Torchserve benchmark suite to decide the best combination of model optimization, hardware, workers, and batch size.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Animated Drawings Performance Tuning \n\nTo use the Torchserve benchmark suite, first we need to have an archived file, \u201c.mar\u201d file as discussed above, that contains the model, handler and all other artifacts to load and run inference. Animated Drawings uses Detectron2\u2019s implementation of Mask-RCNN for an object detection model.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "How to run benchmark suite \n\nThe [Automated benchmark suite](https://github.com/pytorch/serve/tree/master/benchmarks#auto-benchmarking-with-apache-bench) in Torchserve let you benchmark multiple models with different setting including batch size and number of worker and finally generate a report for you. To get started:\n\n```\ngit clone https://github.com/pytorch/serve.git\n\ncd serve/benchmarks\n\npip install -r requirements-ab.txt\n\napt-get install apache2-utils", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Model level settings can be configured in a yaml file similar to \n\n```yaml", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Model_name:\n eager_mode:\n benchmark_engine: \"ab\"\n url: \"Path to .mar file\"\n workers:\n - 1\n - 4\n batch_delay: 100\n batch_size:\n - 1\n - 2\n - 4\n - 8\n requests: 10000\n concurrency: 10\n input: \"Path to model input\"\n backend_profiling: False\n exec_env: \"local\"\n processors:\n - \"cpu\"\n - \"gpus\": \"all\"", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "This yaml file will be referenced in the [benchmark_config_template](https://github.com/pytorch/serve/blob/master/benchmarks/benchmark_config_template.yaml#L12).yaml file that includes other settings for generating reports, this can optionally work with AWS cloud watch for logs as well.\n\n```\npython benchmarks/auto_benchmark.py --input benchmark_config_template.yaml", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Running the **benchmarks**, results will be written in \u201ccsv\u201d file that can be found in \u201c_ /tmp/benchmark/ab_report.csv_\u201d and full report \u201c/tmp/ts_benchmark/report.md\". It will include items such as Torchserve average latency, model P99 latency, throughput, number of concurrency, number of requests, handler time, and some other metrics. Here we focus on some of the important ones that we track to tune the performance which are, **concurrency**, **model P99** latency, **throughput**. We look at these numbers", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "specifically in **combination** with **batch size**, the used **device, number of workers** and if any **model optimization** has been done.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "The **latency SLA** for this model has been set to **100 ms,** this is real-time application and as we discussed earlier, latency is more of a concern and **throughput** ideally should be as high as possible while it does **not violate** the **latency SLA.**\n\nThrough searching the space, over different batch sizes (1-32), number of workers (1-16) and devices (CPU,GPU), we have run a set of experiments that summarized the best ones in the table below.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "\n \n Device \n | \n Concurrency \n | \n # Requests\n | \n #workers\n | \n Batch size\n | \n Payload/image\n | \n Optimization \n | \n Throughput \n | \n Latency P99\n | \n
\n \n CPU\n | \n 10\n | \n 1000\n | \n 1\n | \n 1\n | \n small\n | \n N/A\n | \n 3.45\n | \n 305.3 ms\n | \n
\n \n CPU\n | \n 1", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": " | \n 1000\n | \n 1\n | \n 1\n | \n small\n | \n N/A\n | \n 3.45\n | \n 291.8 ms\n | \n
\n \n GPU\n | \n 10\n | \n 1000\n | \n 1\n | \n 1\n | \n small\n | \n N/A\n | \n 41.05\n | \n 25.48 ms\n | \n
\n \n GPU\n | \n 1\n | \n 1000\n | \n 1\n | \n 1\n | \n small\n | \n N/A\n | ", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "42.21\n | \n 23.6 ms\n | \n
\n \n GPU\n | \n 10\n | \n 1000\n | \n 1\n | \n 4\n | \n small\n | \n N/A\n | \n 54.78\n | \n 73.62 ms\n | \n
\n \n GPU\n | \n 10\n | \n 1000\n | \n 1\n | \n 4\n | \n small\n | \n model.half()\n | \n 78.62\n | \n 50.69 ms\n | \n
\n \n GPU\n | \n 10\n | ", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "1000\n | \n 1\n | \n 8\n | \n small\n | \n model.half()\n | \n 85.29\n | \n 94.4 ms\n | \n
\n
", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "The latency of this model on CPU with all of the tried settings in terms of batch size, concurrency and number of workers did not meet the SLA, in fact ~13x higher.\n\n**Moving** the model serving **to GPU**, immediately could **improve** the **latency** ~**13x **from 305 ms down to 23.6 ms.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "One of the **simplest** **optimizations** that we could do for the model was lowering its precision to **fp16**, it is one liner (**model.half()**) and could reduce the **model P99 latency **by **32%** and increase the throughput by almost the same amount.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "There could be other optimization done by Torchscripting the model and using [optimize_for_inference](https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L168) or other tricks including onnx or tensorrt runtime optimizations which leverage aggressive fusions are out of the scope of this post. We will discuss model optimizations in a separate post.\n\nWe found both on CPU and GPU , setting **number of workers=1 **worked the best in this case.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "* Moving the model to GPU, using **number of workers = 1**, and **batch size = 1** increased the **Throughput ~12x compared** to **CPU and latency ~13x.**\n* Moving the model to GPU, using **model.half()**, **number of workers = 1**, and **batch size = 8** yielded **best** results in terms of **Throughput** and tolerable latency. **Throughput** increased **~25x compared** to **CPU with latency still meeting the SLA (94.4ms).**", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "_Note: if you are running the benchmark suite, make sure you are setting a proper `batch_delay` and set the concurrency of the request to a number proportional to your batch size. Concurrency here means the number of concurrent requests being sent to the server._", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Conclusion\n\nIn this post, we have discussed the considerations and knobs that Torchserve expose to tune the performance in production. We have discussed the Torchserve benchmark suite as a means to tune the performance and get insights on possible choices for model optimizations, hardware choice and cost in general. We used Animated Drawings app which uses Detectron2\u2019s Mask-RCNN model as a case-study to showcase the performance tuning with benchmark suite.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "For more details on Performance tuning in Torchserve please refer to our documentation [here](https://github.com/pytorch/serve/blob/master/docs/performance_guide.md).\nAlso feel free to open a ticket on [Torchserve repo](https://github.com/pytorch/serve/issues) for any further questions and feedback.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "Acknowledgement\n\nWe would like to thank Somya Jain (Meta), Christopher Gustave (Meta) for their great support and guidance throughout many steps of this blog and providing insights to Sketch Animator workflow. Also, special thanks to[ Li Ning](https://www.linkedin.com/in/li-ning-7274604/) from AWS for the great efforts to make performance tuning much easier on Torchserve with automated benchmark suite.\n\n\n", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Scaling Vision Model Training Platforms with PyTorch\"\nauthor: Vaibhav Aggarwal, Mannat Singh, Anjali Sridhar, Yanghao Li, Shoubhik Debnath, Ronghang Hu, Will Feng, Xinlei Chen, Tingting Markstrum, Diana Liskovich, Anupam Bhatnagar, Chay Ryali, Haoqi Fan, Tete Xiao, Min Xu, Rahul Iyer, Christoph Feichtenhofer, Ross Girshick, Piotr Dollar, Aaron Adcock, Wan-Yen Lo, CK Luk\nfeatured-img: \"/assets/images/scaling-vision-figure_1-solutions-to-the-challenges.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "*TL;DR: We demonstrate the use of PyTorch with FairScale\u2019s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discuss our techniques for scaling and optimizing these models on a GPU cluster. The goal of this platform scaling effort is to enable research at scale. This blog does not discuss model accuracy, new model architectures, or new training recipes.*", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "1. Introduction\n\nLatest vision research [1, 2] demonstrates model scaling as a promising research direction. In this project, we aim to enable our platforms to train massive vision transformer (ViT) [3] models. We present our work on scaling the largest trainable ViT from 1B to 120B parameters in FAIR vision platforms. We wrote ViT in PyTorch and leveraged its support for large-scale, distributed training on a GPU cluster.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In the rest of this blog, we will first discuss the main challenges, namely *scalability*, *optimization*, and *numerical stability*. Then we will discuss how we tackle them with techniques including *data and model parallelism*, *automatic mixed precision*, *kernel fusion*, and *bfloat16*. Finally, we present our results and conclude.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "2. Main Challenges", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "2.1 Scalability\n\nThe key scalability challenge is to efficiently shard a model\u2019s operations and state across multiple GPUs. A 100B parameter model requires ~200GB of RAM just for parameters, assuming fp16 representation. So, it is impossible to fit the model on a single GPU (A100 has at most 80GB RAM). Therefore, we need some way to efficiently shard a model\u2019s data (input, parameters, activations, and optimizer state) across multiple GPUs.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Another aspect of this problem is to scale without significantly changing the training recipe. E.g. Certain representation learning recipes use a global batch size of up to 4096 beyond which we start to see accuracy degradation. We cannot scale to more than 4096 GPUs without using some form of tensor or pipeline parallelism.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "2.2 Optimization", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The key optimization challenge is to maintain high GPU utilization even as we scale the number of model parameters and flops. When we scale models to teraflops and beyond, we start to hit major bottlenecks in our software stack that super-linearly increase training time and reduce accelerator utilization. We require hundreds or thousands of GPUs to run just a single experiment. Improvements in accelerator utilization can lead to significant reductions in cost and improve fleet utilization. It enables us to", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "fund more projects and run more experiments in parallel.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "2.3 Numerical Stability", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The key stability challenge is to avoid numerical instability and divergence at large scale. We empirically observed in our experiments that the training instability gets severe and hard to deal with when we scale up model sizes, data, batch sizes, learning rate, etc. Vision Transformers particularly face training instability even at a lower parameter threshold. E.g., we find it challenging to train even ViT-H (with just 630M parameters) in mixed-precision mode without using strong data augmentation. We", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "need to study the model properties and training recipes to make sure that the models train stably and converge.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3. Our Solutions\n\n**Figure 1** depicts our solutions to each of the challenges.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3.1 Addressing scaling challenges with data parallelism and model parallelism\n\nWe apply various forms of data and model parallelism to enable fitting very large models in GPU memory.\n\nWe use FairScale\u2019s *FullyShardedDataParallel (FSDP)* API [4], based on PyTorch, to shard parameters, gradients, and optimizer state across multiple GPUs, thereby reducing the memory footprint per GPU. This process consists of the following three steps:", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "- Step 1: We wrapped the entire model in a single FSDP instance. This shards the model parameters at the end of a forward pass and gathers parameters at the beginning of a forward pass. This enabled us to scale ~3x from 1.5B to 4.5B parameters.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "- Step 2: We experimented with wrapping individual model layers in separate FSDP instances. This nested wrapping further reduced the memory footprint by sharding and gathering parameters of individual model layers instead of an entire model. The peak memory is then determined by an individually wrapped transformer block in GPU memory in this mode instead of the entire model.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "- Step 3: We used *activation-checkpoint* to reduce the memory consumption by activations. It saves the input tensors and discards the intermediate activation tensors during the forward pass. These are recomputed during the backward pass.\n\nIn addition, we experimented with model-parallelism techniques such as pipeline parallelism [5], which allow us to scale to more GPUs without increasing the batch size.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3.2 Addressing optimization challenges with advanced AMP and kernel fusion", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Advanced AMP\n\nAutomatic Mixed Precision (AMP) [6] training refers to training models using a lower precision of bits than FP32 or the default but still maintaining accuracy. We experimented with three levels of AMP as described below:\n\n- AMP O1: This refers to training in mixed precision where weights are in FP32 and some operations are in FP16. With AMP O1, the ops that might impact accuracy remain in FP32 and are not autocasted to FP16.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "- AMP O2: This refers to training in mixed precision but with more weights and ops in FP16 than in O1. Weights do not implicitly remain in FP32 and are cast to FP16. A copy of the master weights is maintained in the FP32 precision that is used by the optimizer. If we want the normalization layer weights in FP32 then we need to explicitly use layer wrapping to ensure that.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "- Full FP16: This refers to training in full FP16 where weights and operations are in FP16. FP16 is challenging to enable for training due to convergence issues.\n\nWe found that AMP O2 with LayerNorm wrapping in FP32 leads to the best performance without sacrificing accuracy.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Kernel Fusion\n\n- To reduce GPU kernel launch overhead and increase GPU work granularity, we experimented with kernel fusions, including fused dropout and fused layer-norm, using the [xformers library](https://github.com/facebookresearch/xformers) [7].", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3.3 Addressing stability challenges by studying ops numerical stability and training recipes", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "BFloat16 in general but with LayerNorm in FP32\n\nThe [bfloat16](https://cloud.google.com/tpu/docs/bfloat16) (BF16) [8] floating-point format provides the same dynamic range as FP32 with a memory footprint identical to FP16. We found that we could train models in the BF16 format using the same set of hyperparameters as in FP32, without special parameter tuning. Nevertheless, we found that we need to keep LayerNorm in FP32 mode in order for the training to converge.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3.4 Final training recipe\n\nA summary of the final training recipe.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "1. Wrap the outer model in an FSDP instance. Enable parameter sharding after the forward pass.\n2. Wrap individual ViT blocks with activation checkpointing, nested FSDP wrapping, and parameter flattening.\n3. Enable mixed precision mode (AMP O2) with bfloat16 representation. Maintain the optimizer state in FP32 precision to enhance numerical stability.\n4. Wrap normalization layers like LayerNorm in FP32 for better numerical stability.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "5. Maximize the Nvidia TensorCore utilization by keeping matrix dimensions to be multiple of 8. For More details check [Nvidia Tensor Core Performance Guide](https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9926-tensor-core-performance-the-ultimate-guide.pdf).", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "4. Results", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In this section, we show the scaling results of ViT on three types of tasks: (1) image classification, (2) object detection (3) video understanding. **Our key result is that we are able to train massive ViT backbones across these vision tasks after applying the discussed scaling and optimization techniques. This enables vision research at a much larger scale.** We trained the models to convergence to verify that we maintain the current baselines even with all the optimizations. A common trend in Figures 2,", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3, 4 is that we are able to train up to 25B-param models with an epoch time of less than 4 hours on 128 A100 GPUs. The 60B and 120B models are relatively slower to train.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 2** shows the *image-classification* scaling result. It plots the epoch time for training ViTs on ImageNet using 128 A100-80GB GPUs with different model sizes.\n\n\n
\n
\n\n\nFigure 2: Image-classification scaling result.\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 3** shows the *object-detection* scaling result. It plots the epoch time for training [ViTDet](https://arxiv.org/abs/2203.16527) [9] with different ViT backbones on COCO using 128 A100-80GB GPUs.\n\n\n
\n
\n\n\nFigure 3: Object-detection scaling result.\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 4** shows the *video-understanding* scaling result. It plots the epoch time for training [MViTv2](https://arxiv.org/abs/2112.01526) [10] models on [Kinetics 400](https://www.deepmind.com/open-source/kinetics) [11] using 128 V100 (32 GB) GPUs in FP32.\n\n\n
\n
\n\n\nFigure 4: Video-understanding scaling result.\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Figure 5** shows the optimization result with the ViT-H model in Figure 2 on 8 A100-40GB GPUs.\nThree versions are used: (1) the baseline uses PyTorch\u2019s DDP [12] with AMP O1, (2) FSDP + AMP-O2 + other optimizations, and (3) FSDP + FP16 + other optimizations. These optimizations altogether speed up the training by up to 2.2x.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\nFigure 5: Training speedups from various optimizations.\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "5. Concluding Remarks\n\nWe have demonstrated the use of PyTorch with FairScale\u2019s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discuss our techniques for scaling and optimizing these models on a GPU cluster. We hope that this article can motivate others to develop large-scale ML models with PyTorch and its ecosystem.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "References\n\n[1] [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377)\n\n[2] [Revisiting Weakly Supervised Pre-Training of Visual Perception Models](https://arxiv.org/abs/2201.08371)\n\n[3] [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929v2)\n\n[4] [fairscale.nn.FullyShardedDataParallel](https://fairscale.readthedocs.io/en/stable/api/nn/fsdp.html)", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[5] [Pipeline parallelism in PyTorch](https://pytorch.org/docs/stable/pipeline.html)\n\n[6] [Automatic Mixed Precision (AMP) in PyTorch](https://pytorch.org/docs/stable/amp.html#module-torch.amp)\n\n[7] [xformers](https://github.com/facebookresearch/xformers)\n\n[8] [The bfloat16 numerical format](https://cloud.google.com/tpu/docs/bfloat16)\n\n[9] [Exploring Plain Vision Transformer Backbones for Object Detection](https://arxiv.org/abs/2203.16527)", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[10] [MViTv2: Improved Multiscale Vision Transformers for Classification and Detection](https://arxiv.org/abs/2112.01526)\n\n[11] [https://www.deepmind.com/open-source/kinetics](https://www.deepmind.com/open-source/kinetics)\n\n[12] [Getting Started with Distributed Data Parallel (DDP)](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html)", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing PyTorch Ecosystem Day'\nauthor: Team PyTorch\n---\n\nWe\u2019re proud to announce our first PyTorch Ecosystem Day. The virtual, one-day event will focus completely on our Ecosystem and Industry PyTorch communities!\n\n\nPyTorch is a deep learning framework of choice for academics and companies, all thanks to its rich ecosystem of tools and strong community. As with our developers, our ecosystem partners play a pivotal role in the development and growth of the community.", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
-{"page_content": "We will be hosting our first PyTorch Ecosystem Day, a virtual event designed for our ecosystem and industry communities to showcase their work and discover new opportunities to collaborate.", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Ecosystem Day will be held on April 21, with both a morning and evening session, to ensure we reach our global community. Join us virtually for a day filled with discussions on new developments, trends, challenges, and best practices through keynotes, breakout sessions, and a unique networking opportunity hosted through Gather.Town .", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
-{"page_content": "Event Details\nApril 21, 2021 (Pacific Time)\nFully digital experience \n \n* Morning Session: (EMEA)\nOpening Talks - 8:00 am-9:00 am PT\nPoster Exhibition & Breakout Sessions - 9:00 am-12:00 pm PT \n\n* Evening Session (APAC/US)\nOpening Talks - 3:00 pm-4:00 pm PT\nPoster Exhibition & Breakout Sessions - 3:00 pm-6:00 pm PT \n\n* Networking - 9:00 am-7:00 pm PT", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
-{"page_content": "There are two ways to participate in PyTorch Ecosystem Day:\n \n1. **Poster Exhibition** from the PyTorch ecosystem and industry communities covering a variety of topics. Posters are available for viewing throughout the duration of the event. To be part of the poster exhibition, please see below for submission details. If your poster is accepted, we highly recommend tending your poster during one of the morning or evening sessions or both!", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
-{"page_content": "2. **Breakout Sessions** are 40-min sessions freely designed by the community. The breakouts can be talks, demos, tutorials or discussions. Note: you must have an accepted poster to apply for the breakout sessions.", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
-{"page_content": "Call for posters now open! [Submit your proposal](https://pytorchecosystemday.fbreg.com/posters) today! Please send us the **title** and **summary** of your projects, tools, and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects. Please no sales pitches. **Deadline for submission is March 18, 2021.**", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
-{"page_content": "Visit [pytorchecosystemday.fbreg.com](http://pytorchecosystemday.fbreg.com) for more information and we look forward to welcoming you to PyTorch Ecosystem Day on April 21st!", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.5 released, new and updated APIs including C++ frontend API parity with Python'\nauthor: Team PyTorch\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "Today, we\u2019re announcing the availability of PyTorch 1.5, along with new and updated libraries. This release includes several major new API additions and improvements. PyTorch now includes a significant update to the C++ frontend, \u2018channels last\u2019 memory format for computer vision models, and a stable release of the distributed RPC framework used for model-parallel training. The release also has new APIs for autograd for hessians and jacobians, and an API that allows the creation of Custom C++ Classes that", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "was inspired by pybind.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "You can find the detailed release notes [here](https://github.com/pytorch/pytorch/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "C++ Frontend API (Stable)\n\nThe C++ frontend API is now at parity with Python, and the features overall have been moved to \u2018stable\u2019 (previously tagged as experimental). Some of the major highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "* Now with ~100% coverage and docs for C++ torch::nn module/functional, users can easily translate their model from Python API to C++ API, making the model authoring experience much smoother.\n* Optimizers in C++ had deviated from the Python equivalent: C++ optimizers can\u2019t take parameter groups as input while the Python ones can. Additionally, step function implementations were not exactly the same. With the 1.5 release, C++ optimizers will always behave the same as the Python equivalent.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "* The lack of tensor multi-dim indexing API in C++ is a well-known issue and had resulted in many posts in PyTorch Github issue tracker and forum. The previous workaround was to use a combination of `narrow` / `select` / `index_select` / `masked_select`, which was clunky and error-prone compared to the Python API\u2019s elegant `tensor[:, 0, ..., mask]` syntax. With the 1.5 release, users can use `tensor.index({Slice(), 0, \"...\", mask})` to achieve the same purpose.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "\u2018Channels last\u2019 memory format for Computer Vision models (Experimental)\n\n\u2018Channels last\u2019 memory layout unlocks ability to use performance efficient convolution algorithms and hardware (NVIDIA\u2019s Tensor Cores, FBGEMM, QNNPACK). Additionally, it is designed to automatically propagate through the operators, which allows easy switching between memory layouts.\n\nLearn more [here](https://github.com/pytorch/pytorch/wiki/Writing-memory-format-aware-operators) on how to write memory format aware operators.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "Custom C++ Classes (Experimental)\n\nThis release adds a new API, `torch::class_`, for binding custom C++ classes into TorchScript and Python simultaneously. This API is almost identical in syntax to [pybind11](https://pybind11.readthedocs.io/en/stable/). It allows users to expose their C++ class and its methods to the TorchScript type system and runtime system such that they can instantiate and manipulate arbitrary C++ objects from TorchScript and Python. An example C++ binding:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "```python\ntemplate \nstruct MyStackClass : torch::CustomClassHolder {\n std::vector stack_;\n MyStackClass(std::vector init) : stack_(std::move(init)) {}\n\n void push(T x) {\n stack_.push_back(x);\n }\n T pop() {\n auto val = stack_.back();\n stack_.pop_back();\n return val;\n }\n};", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "static auto testStack =\n torch::class_>(\"myclasses\", \"MyStackClass\")\n .def(torch::init>())\n .def(\"push\", &MyStackClass::push)\n .def(\"pop\", &MyStackClass::pop)\n .def(\"size\", [](const c10::intrusive_ptr& self) {\n return self->stack_.size();\n });", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "Which exposes a class you can use in Python and TorchScript like so:\n\n```python\n@torch.jit.script\ndef do_stacks(s : torch.classes.myclasses.MyStackClass):\n s2 = torch.classes.myclasses.MyStackClass([\"hi\", \"mom\"])\n print(s2.pop()) # \"mom\"\n s2.push(\"foobar\")\n return s2 # [\"hi\", \"foobar\"]\n```\n\nYou can try it out in the tutorial [here](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "Distributed RPC framework APIs (Now Stable)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "The Distributed [RPC framework](https://pytorch.org/docs/stable/rpc.html) was launched as experimental in the 1.4 release and the proposal is to mark Distributed RPC framework as stable and no longer experimental. This work involves a lot of enhancements and bug fixes to make the distributed RPC framework more reliable and robust overall, as well as adding a couple of new features, including profiling support, using TorchScript functions in RPC, and several enhancements for ease of use. Below is an overview", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "of the various APIs within the framework:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "RPC API\nThe RPC API allows users to specify functions to run and objects to be instantiated on remote nodes. These functions are transparently recorded so that gradients can backpropagate through remote nodes using Distributed Autograd.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "Distributed Autograd", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "Distributed Autograd connects the autograd graph across several nodes and allows gradients to flow through during the backwards pass. Gradients are accumulated into a context (as opposed to the .grad field as with Autograd) and users must specify their model\u2019s forward pass under a with `dist_autograd.context()` manager in order to ensure that all RPC communication is recorded properly. Currently, only FAST mode is implemented (see", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "[here](https://pytorch.org/docs/stable/rpc/distributed_autograd.html#distributed-autograd-design) for the difference between FAST and SMART modes).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "Distributed Optimizer\nThe distributed optimizer creates RRefs to optimizers on each worker with parameters that require gradients, and then uses the RPC API to run the optimizer remotely. The user must collect all remote parameters and wrap them in an `RRef`, as this is required input to the distributed optimizer. The user must also specify the distributed autograd `context_id` so that the optimizer knows in which context to look for gradients.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "Learn more about distributed RPC framework APIs [here](https://pytorch.org/docs/stable/rpc.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "New High level autograd API (Experimental)\n\nPyTorch 1.5 brings new functions including jacobian, hessian, jvp, vjp, hvp and vhp to the `torch.autograd.functional` submodule. This feature builds on the current API and allows the user to easily perform these functions.\n\nDetailed design discussion on GitHub can be found [here](https://github.com/pytorch/pytorch/issues/30632).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "Python 2 no longer supported\n\nStarting PyTorch 1.5.0, we will no longer support Python 2, specifically version 2.7. Going forward support for Python will be limited to Python 3, specifically Python 3.5, 3.6, 3.7 and 3.8 (first enabled in PyTorch 1.4.0).\n\n\n*We\u2019d like to thank the entire PyTorch team and the community for all their contributions to this work.*\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'OpenMined and PyTorch partner to launch fellowship funding for privacy-preserving ML community'\nauthor: Andrew Trask (OpenMined/U.Oxford), Shubho Sengupta, Laurens van der Maaten, Joe Spisak\nexcerpt: Many applications of machine learning (ML) pose a range of security and privacy challenges.\n---\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "Many applications of machine learning (ML) pose a range of security and privacy challenges. In particular, users may not be willing or allowed to share their data, which prevents them from taking full advantage of ML platforms like PyTorch. To take the field of privacy-preserving ML (PPML) forward, OpenMined and PyTorch are announcing plans to jointly develop a combined platform to accelerate PPML research as well as new funding for fellowships.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "There are many techniques attempting to solve the problem of privacy in ML, each at various levels of maturity. These include (1) homomorphic encryption, (2) secure multi-party computation, (3) trusted execution environments, (4) on-device computation, (5) federated learning with secure aggregation, and (6) differential privacy. Additionally, a number of open source projects implementing these techniques were created with the goal of enabling research at the intersection of privacy, security, and ML. Among", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "them, PySyft and CrypTen have taken an \u201cML-first\u201d approach by presenting an API that is familiar to the ML community, while masking the complexities of privacy and security protocols. We are excited to announce that these two projects are now collaborating closely to build a mature PPML ecosystem around PyTorch.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, to bolster this ecosystem and take the field of privacy preserving ML forward, we are also calling for contributions and supporting research efforts on this combined platform by providing funding to support the OpenMined community and the researchers that contribute, build proofs of concepts and desire to be on the cutting edge of how privacy-preserving technology is applied. We will provide funding through the [RAAIS Foundation](https://www.raais.org/), a non-profit organization with a", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "mission to advance education and research in artificial intelligence for the common good. We encourage interested parties to apply to one or more of the fellowships listed below.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "Tools Powering the Future of Privacy-Preserving ML\n\nThe next generation of privacy-preserving open source tools enable ML researchers to easily experiment with ML models using secure computing techniques without needing to be cryptography experts. By integrating with PyTorch, PySyft and CrypTen offer familiar environments for ML developers to research and apply these techniques as part of their work.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "**PySyft** is a Python library for secure and private ML developed by the OpenMined community. It is a flexible, easy-to-use library that makes secure computation techniques like [multi-party computation (MPC)](https://en.wikipedia.org/wiki/Secure_multi-party_computation) and privacy-preserving techniques like [differential privacy](https://en.wikipedia.org/wiki/Differential_privacy) accessible to the ML community. It prioritizes ease of use and focuses on integrating these techniques into end-user use", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "cases like federated learning with mobile phones and other edge devices, encrypted ML as a service, and privacy-preserving data science.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "**CrypTen** is a framework built on PyTorch that enables private and secure ML for the PyTorch community. It is the first step along the journey towards a privacy-preserving mode in PyTorch that will make secure computing techniques accessible beyond cryptography researchers. It currently implements [secure multiparty computation](https://en.wikipedia.org/wiki/Secure_multi-party_computation) with the goal of offering other secure computing backends in the near future. Other benefits to ML researchers", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "include:", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "* It is **ML first** and presents secure computing techniques via a CrypTensor object that looks and feels exactly like a PyTorch Tensor. This allows the user to use automatic differentiation and neural network modules akin to those in PyTorch.\n* The framework focuses on **scalability and performance** and is built with real-world challenges in mind.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "The focus areas for CrypTen and PySyft are naturally aligned and complement each other. The former focuses on building support for various secure and privacy preserving techniques on PyTorch through an encrypted tensor abstraction, while the latter focuses on end user use cases like deployment on edge devices and a user friendly data science platform.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "Working together will enable PySyft to use CrypTen as a backend for encrypted tensors. This can lead to an increase in performance for PySyft and the adoption of CrypTen as a runtime by PySyft\u2019s userbase. In addition to this, PyTorch is also adding cryptography friendly features such as support for cryptographically secure random number generation. Over the long run, this allows each library to focus exclusively on its core competencies while enjoying the benefits of the synergistic relationship.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "New Funding for OpenMined Contributors\n\nWe are especially excited to announce that the PyTorch team has invested $250,000 to support OpenMined in furthering the development and proliferation of privacy-preserving ML. This gift will be facilitated via the [RAAIS Foundation](https://www.raais.org/) and will be available immediately to support paid fellowship grants for the OpenMined community.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "How to get involved\n\nThanks to the support from the PyTorch team, OpenMined is able to offer three different opportunities for you to participate in the project\u2019s development. Each of these fellowships furthers our shared mission to lower the barrier-to-entry for privacy-preserving ML and to create a more privacy-preserving world.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "Core PySyft CrypTen Integration Fellowships", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "During these fellowships, we will integrate CrypTen as a supported backend for encrypted computation in PySyft. This will allow for the high-performance, secure multi-party computation capabilities of CrypTen to be used alongside other important tools in PySyft such as differential privacy and federated learning. For more information on the roadmap and how to apply for a paid fellowship, check out the project\u2019s [call for contributors](https://blog.openmined.org/openmined-pytorch-fellowship-crypten-project).", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "Federated Learning on Mobile, Web, and IoT Devices", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "During these fellowships, we will be extending PyTorch with the ability to perform federated learning across mobile, web, and IoT devices. To this end, a PyTorch front-end will be able to coordinate across federated learning backends that run in Javascript, Kotlin, Swift, and Python. Furthermore, we will also extend PySyft with the ability to coordinate these backends using peer-to-peer connections, providing low latency and the ability to run secure aggregation as a part of the protocol. For more", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "information on the roadmap and how to apply for a paid fellowship, check out the project\u2019s [call for contributors](https://blog.openmined.org/announcing-the-pytorch-openmined-federated-learning-fellowships).", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "Development Challenges", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "Over the coming months, we will issue regular open competitions for increasing the performance and security of the PySyft and PyGrid codebases. For performance-related challenges, contestants will compete (for a cash prize) to make a specific PySyft demo (such as federated learning) as fast as possible. For security-related challenges, contestants will compete to hack into a PyGrid server. The first to demonstrate their ability will win the cash bounty! For more information on the challenges and to sign up", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "to receive emails when each challenge is opened, [sign up here](http://blog.openmined.org/announcing-the-openmined-pytorch-development-challenges).", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "To apply, select one of the above projects and identify a role that matches your strengths!\n\nCheers,\n\nAndrew, Laurens, Joe, and Shubho", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'How Computational Graphs are Constructed in PyTorch'\nauthor: Preferred Networks\nfeatured-img: 'assets/images/augmented_computational_graph.png'\n---", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In the previous [post](https://pytorch.org/blog/overview-of-pytorch-autograd-engine/) we went over the theoretical foundations of automatic differentiation and reviewed the implementation in PyTorch. In this post, we will be showing the parts of PyTorch involved in creating the graph and executing it. In order to understand the following contents, please read @ezyang\u2019s wonderful [blog post](http://blog.ezyang.com/2019/05/pytorch-internals/) about PyTorch internals.\n\n# Autograd components", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "First of all, let\u2019s look at where the different components of autograd live:", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[tools/autograd](https://github.com/pytorch/pytorch/tree/release/1.9/tools/autograd): Here we can find the definition of the derivatives as we saw in the previous post [derivatives.yaml](https://github.com/pytorch/pytorch/blob/release/1.9/tools/autograd/derivatives.yaml), several python scripts and a folder called [templates](https://github.com/pytorch/pytorch/tree/release/1.9/tools/autograd/templates). These scripts and the templates are used at building time to generate the C++ code for the derivatives as", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "specified in the yaml file. Also, the scripts here generate wrappers for the regular ATen functions so that the computational graph can be constructed.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[torch/autograd](https://github.com/pytorch/pytorch/tree/release/1.9/torch/autograd): This folder is where the autograd components that can be used directly from python are located. In [function.py](https://github.com/pytorch/pytorch/blob/release/1.9/torch/autograd/function.py) we find the actual definition of `torch.autograd.Function`, a class used by users to write their own differentiable functions in python as per the documentation.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[functional.py](https://github.com/pytorch/pytorch/blob/release/1.9/torch/autograd/functional.py) holds components for functionally computing the jacobian vector product, hessian, and other gradient related computations of a given function.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The rest of the files have additional components such as gradient checkers, anomaly detection, and the autograd profiler.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[torch/csrc/autograd](https://github.com/pytorch/pytorch/tree/release/1.9/torch/csrc/autograd): This is where the graph creation and execution-related code lives.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "All this code is written in C++, since it is a critical part that is required to be extremely performant. Here we have several files that implement the engine, metadata storage, and all the needed components. Alongside this, we have several files whose names start with `python_`, and their main responsibility is to allow python objects to be used in the autograd engine.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "# Graph Creation\n\n[Previously](https://pytorch.org/blog/overview-of-pytorch-autograd-engine/), we described the creation of a computational graph. Now, we will see how PyTorch creates these graphs with references to the actual codebase.\n\n\n
\n
\nFigure 1: Example of an augmented computational graph\n
\n\nIt all starts when in our python code, where we request a tensor to require the gradient.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```py\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "When the `required_grad` flag is set in tensor creation, c10 will [allocate](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/c10/core/TensorImpl.cpp#L382-L406) an `AutogradMeta` object that is used to hold the graph information.\n\n```c++\n\nvoid TensorImpl::set_requires_grad(bool requires_grad) {\n ...\n if (!autograd_meta_)\n autograd_meta_ = impl::GetAutogradMetaFactory()->make();\n autograd_meta_->set_requires_grad(requires_grad, this);\n}", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The `AutogradMeta` object is defined in [torch/csrc/autograd/variable.h](https://github.com/pytorch/pytorch/blob/release/1.9/torch/csrc/autograd/variable.h#L190-L286) as follows:\n\n```c++\n\nstruct TORCH_API AutogradMeta : public c10::AutogradMetaInterface {\n std::string name_;\n\n Variable grad_;\n std::shared_ptr grad_fn_;\n std::weak_ptr grad_accumulator_;\n // other fields and methods\n ...\n};", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The most important fields in this structure are the computed gradient in `grad_` and a pointer to the function `grad_fn` that will be called by the engine to produce the actual gradient. Also, there is a gradient accumulator object that is used to add together all the different gradients where this tensor is involved as we will see in the graph execution.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Graphs, Nodes and Edges.\n\nNow, when we call a differentiable function that takes this tensor as an argument, the associated metadata will be populated. Let\u2019s suppose that we call a regular torch function that is implemented in ATen. Let it be the multiplication as in our previous blog post example. The resulting tensor has a field called `grad_fn` that is essentially a pointer to the function that will be used to compute the gradient of that operation.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```py\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> v = x[0] * x[1]\n>>> v\ntensor(0.3750, grad_fn=)", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Here we see that the tensors\u2019 `grad_fn` has a `MulBackward0` value. This function is the same that was written in the [derivatives.yaml](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/tools/autograd/derivatives.yaml#L840-L843) file, and its C++ code was generated automatically by all the scripts in `tools/autograd`. It\u2019s auto-generated source code can be seen in `torch/csrc/autograd/generated/Functions.cpp`.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nvariable_list MulBackward0::apply(variable_list&& grads) {\n std::lock_guard lock(mutex_);", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "IndexRangeGenerator gen;\n auto self_ix = gen.range(1);\n auto other_ix = gen.range(1);\n variable_list grad_inputs(gen.size());\n auto& grad = grads[0];\n auto self = self_.unpack();\n auto other = other_.unpack();\n bool any_grad_defined = any_variable_defined(grads);\n if (should_compute_output({ other_ix })) {\n auto grad_result = any_grad_defined ? (mul_tensor_backward(grad, self, other_scalar_type)) : Tensor();\n copy_range(grad_inputs, other_ix, grad_result);\n }", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (should_compute_output({ self_ix })) {\n auto grad_result = any_grad_defined ? (mul_tensor_backward(grad, other, self_scalar_type)) : Tensor();\n copy_range(grad_inputs, self_ix, grad_result);\n }\n return grad_inputs;\n}", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The `grad_fn` objects inherit from the [`TraceableFunction`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/function.h#L535-L541) class, a descendant of `Node` with just a property set to enable tracing for debugging and optimization purposes. A graph by definition has nodes and edges, so these functions are indeed the nodes of the computational graph that are linked together by using `Edge` objects to enable the graph traversal later on.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The `Node` definition can be found in the [torch/csrc/autograd/function.h](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/function.h#L50-L533) file.\n\n```c++\nstruct TORCH_API Node : std::enable_shared_from_this {\n ...\n /// Evaluates the function on the given inputs and returns the result of the\n /// function call.\n variable_list operator()(variable_list&& inputs) {\n ...\n }", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "protected:\n /// Performs the `Node`'s actual operation.\n virtual variable_list apply(variable_list&& inputs) = 0;\n \u2026\n edge_list next_edges_;", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Essentially we see that it has an override of the `operator ()` that performs the call to the actual function, and a pure virtual function called `apply`. The automatically generated functions override this `apply` method as we saw in the `MulBackward0` example above. Finally, the node also has a list of edges to enable graph connectivity.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The [Edge](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/edge.h#L14-L39) object is used to link `Node`s together and its implementation is straightforward.\n\n```c++\nstruct Edge {\n ...\n /// The function this `Edge` points to.\n std::shared_ptr function;\n /// The identifier of a particular input to the function.\n uint32_t input_nr;\n};", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "It only requires a function pointer (the actual `grad_fn` objects that the edges link together), and an input number that acts as an id for the edge.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Linking nodes together\n\nWhen we invoke the product operation of two tensors, we enter into the realm of autogenerated code. All the scripts that we saw in `tools/autograd` fill a series of templates that wrap the differentiable functions in ATen. These functions have code to construct the backward graph during the forward pass.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The [gen_variable_type.py](https://github.com/pytorch/pytorch/blob/release/1.9/tools/autograd/gen_variable_type.py) script is in charge of writing all this wrapping code. This script is called from the [tools/autograd/gen_autograd.py](https://github.com/pytorch/pytorch/blob/release/1.9/tools/autograd/gen_autograd.py) during the pytorch build process and it will output the automatically generated function wrappers to `torch/csrc/autograd/generated/`.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Let\u2019s take a look at how the tensor multiplication generated function looks like. The code has been simplified, but it can be found in the `torch/csrc/autograd/generated/VariableType_4.cpp` file when compiling pytorch from source.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nat::Tensor mul_Tensor(c10::DispatchKeySet ks, const at::Tensor & self, const at::Tensor & other) {\n ...\n auto _any_requires_grad = compute_requires_grad( self, other );\n std::shared_ptr grad_fn;\n if (_any_requires_grad) {\n // Creates the link to the actual grad_fn and links the graph for backward traversal\n grad_fn = std::shared_ptr(new MulBackward0(), deleteNode);\n grad_fn->set_next_edges(collect_next_edges( self, other ));\n ...\n }\n \u2026", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Does the actual function call to ATen\n auto _tmp = ([&]() {\n at::AutoDispatchBelowADInplaceOrView guard;\n return at::redispatch::mul(ks & c10::after_autograd_keyset, self_, other_);\n })();", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "auto result = std::move(_tmp);\n if (grad_fn) {\n // Connects the result to the graph\n set_history(flatten_tensor_args( result ), grad_fn);\n }\n ...\n return result;\n}", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Let\u2019s walk through the most important lines of this code.\nFirst of all, the `grad_fn` object is created with: ` grad_fn = std::shared_ptr(new MulBackward0(), deleteNode);`.\n\nAfter the `grad_fn` object is created, the edges used to link the nodes together are created by using the `grad_fn->set_next_edges(collect_next_edges( self, other ));` calls.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nstruct MakeNextFunctionList : IterArgs {\n edge_list next_edges;\n using IterArgs::operator();\n void operator()(const Variable& variable) {\n if (variable.defined()) {\n next_edges.push_back(impl::gradient_edge(variable));\n } else {\n next_edges.emplace_back();\n }\n }\n void operator()(const c10::optional& variable) {\n if (variable.has_value() && variable->defined()) {\n next_edges.push_back(impl::gradient_edge(*variable));", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "} else {\n next_edges.emplace_back();\n }\n }\n};", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "template \nedge_list collect_next_edges(Variables&&... variables) {\n detail::MakeNextFunctionList make;\n make.apply(std::forward(variables)...);\n return std::move(make.next_edges);\n}", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "* **Batch size** can directly affect the latency and the throughput. To better utilize the compute resources batch size needs to be increased. However, there is a tradeoff between latency and throughput. **Larger batch sizes** can **increase** the **throughput but results in a higher latency** as well. Batch size can be set in Torchserve in two ways, either through[ model config](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/configuration.md#config-model) in config.properties or while registering the model using [Management API](https://github.com/pytorch/serve/blob/c87bfec8916d340de5de5810b14a016049b0e395/docs/management_api.md#scale-workers). \n\nIn the next section, we are going to use Torchserve benchmark suite to decide the best combination of model optimization, hardware, workers, and batch size. \n\n## Animated Drawings Performance Tuning", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "To use the Torchserve benchmark suite, first we need to have an archived file, \u201c.mar\u201d file as discussed above, that contains the model, handler and all other artifacts to load and run inference. Animated Drawings uses Detectron2\u2019s implementation of Mask-RCNN for an object detection model. \n\n### How to run benchmark suite \n\nThe [Automated benchmark suite](https://github.com/pytorch/serve/tree/master/benchmarks#auto-benchmarking-with-apache-bench) in Torchserve let you benchmark multiple models with different setting including batch size and number of worker and finally generate a report for you. To get started:\n\n```\ngit clone https://github.com/pytorch/serve.git\n\ncd serve/benchmarks\n\npip install -r requirements-ab.txt\n\napt-get install apache2-utils\n```\n\nModel level settings can be configured in a yaml file similar to \n\n```yaml", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "```yaml\n\nModel_name:\n eager_mode:\n benchmark_engine: \"ab\"\n url: \"Path to .mar file\"\n workers:\n - 1\n - 4\n batch_delay: 100\n batch_size:\n - 1\n - 2\n - 4\n - 8\n requests: 10000\n concurrency: 10\n input: \"Path to model input\"\n backend_profiling: False\n exec_env: \"local\"\n processors:\n - \"cpu\"\n - \"gpus\": \"all\"\n\n```\n\nThis yaml file will be referenced in the [benchmark_config_template](https://github.com/pytorch/serve/blob/master/benchmarks/benchmark_config_template.yaml#L12).yaml file that includes other settings for generating reports, this can optionally work with AWS cloud watch for logs as well.\n\n```\npython benchmarks/auto_benchmark.py --input benchmark_config_template.yaml\n```", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "Running the **benchmarks**, results will be written in \u201ccsv\u201d file that can be found in \u201c_ /tmp/benchmark/ab_report.csv_\u201d and full report \u201c/tmp/ts_benchmark/report.md\". It will include items such as Torchserve average latency, model P99 latency, throughput, number of concurrency, number of requests, handler time, and some other metrics. Here we focus on some of the important ones that we track to tune the performance which are, **concurrency**, **model P99** latency, **throughput**. We look at these numbers specifically in **combination** with **batch size**, the used **device, number of workers** and if any **model optimization** has been done.\n\n\nThe **latency SLA** for this model has been set to **100 ms,** this is real-time application and as we discussed earlier, latency is more of a concern and **throughput** ideally should be as high as possible while it does **not violate** the **latency SLA.**", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "Through searching the space, over different batch sizes (1-32), number of workers (1-16) and devices (CPU,GPU), we have run a set of experiments that summarized the best ones in the table below.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "\n \n Device \n | \n Concurrency \n | \n # Requests\n | \n #workers\n | \n Batch size\n | \n Payload/image\n | \n Optimization \n | \n Throughput \n | \n Latency P99\n | \n
\n \n CPU\n | \n 10\n | \n 1000\n | \n 1\n | \n 1\n | \n small\n | \n N/A\n | \n 3.45\n | \n 305.3 ms\n | \n
\n \n CPU\n | \n 1\n | \n 1000\n | \n 1\n | \n 1\n | \n small\n | \n N/A\n | \n 3.45\n | \n 291.8 ms\n | \n
\n \n GPU\n | \n 10\n | \n 1000\n | \n 1\n | \n 1\n | \n small\n | \n N/A\n | \n 41.05\n | \n 25.48 ms\n | \n
\n \n GPU\n | \n 1\n | \n 1000\n | \n 1\n | \n 1\n | \n small\n | \n N/A\n | \n 42.21\n | \n 23.6 ms\n | \n
\n \n GPU\n | \n 10\n | \n 1000\n | \n 1\n | \n 4\n | \n small\n | \n N/A\n | \n 54.78\n | \n 73.62 ms\n | \n
\n \n GPU\n | \n 10\n | \n 1000\n | \n 1\n | \n 4\n | \n small\n | \n model.half()\n | \n 78.62\n | \n 50.69 ms\n | \n
\n \n GPU\n | \n 10\n | \n 1000\n | \n 1\n | \n 8\n | \n small\n | \n model.half()\n | \n 85.29\n | \n 94.4 ms\n | \n
\n
", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "The latency of this model on CPU with all of the tried settings in terms of batch size, concurrency and number of workers did not meet the SLA, in fact ~13x higher.\n\n**Moving** the model serving **to GPU**, immediately could **improve** the **latency** ~**13x **from 305 ms down to 23.6 ms. \n\nOne of the **simplest** **optimizations** that we could do for the model was lowering its precision to **fp16**, it is one liner (**model.half()**) and could reduce the **model P99 latency **by **32%** and increase the throughput by almost the same amount.\n\nThere could be other optimization done by Torchscripting the model and using [optimize_for_inference](https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L168) or other tricks including onnx or tensorrt runtime optimizations which leverage aggressive fusions are out of the scope of this post. We will discuss model optimizations in a separate post.\n\nWe found both on CPU and GPU , setting **number of workers=1 **worked the best in this case.", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "* Moving the model to GPU, using **number of workers = 1**, and **batch size = 1** increased the **Throughput ~12x compared** to **CPU and latency ~13x.**\n* Moving the model to GPU, using **model.half()**, **number of workers = 1**, and **batch size = 8** yielded **best** results in terms of **Throughput** and tolerable latency. **Throughput** increased **~25x compared** to **CPU with latency still meeting the SLA (94.4ms).**\n \n_Note: if you are running the benchmark suite, make sure you are setting a proper `batch_delay` and set the concurrency of the request to a number proportional to your batch size. Concurrency here means the number of concurrent requests being sent to the server._\n\n## Conclusion", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "## Conclusion\n\nIn this post, we have discussed the considerations and knobs that Torchserve expose to tune the performance in production. We have discussed the Torchserve benchmark suite as a means to tune the performance and get insights on possible choices for model optimizations, hardware choice and cost in general. We used Animated Drawings app which uses Detectron2\u2019s Mask-RCNN model as a case-study to showcase the performance tuning with benchmark suite. \n\nFor more details on Performance tuning in Torchserve please refer to our documentation [here](https://github.com/pytorch/serve/blob/master/docs/performance_guide.md).\nAlso feel free to open a ticket on [Torchserve repo](https://github.com/pytorch/serve/issues) for any further questions and feedback. \n\n### Acknowledgement", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "### Acknowledgement\n\nWe would like to thank Somya Jain (Meta), Christopher Gustave (Meta) for their great support and guidance throughout many steps of this blog and providing insights to Sketch Animator workflow. Also, special thanks to[ Li Ning](https://www.linkedin.com/in/li-ning-7274604/) from AWS for the great efforts to make performance tuning much easier on Torchserve with automated benchmark suite.\n\n\n", "metadata": {"source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Scaling Vision Model Training Platforms with PyTorch\"\nauthor: Vaibhav Aggarwal, Mannat Singh, Anjali Sridhar, Yanghao Li, Shoubhik Debnath, Ronghang Hu, Will Feng, Xinlei Chen, Tingting Markstrum, Diana Liskovich, Anupam Bhatnagar, Chay Ryali, Haoqi Fan, Tete Xiao, Min Xu, Rahul Iyer, Christoph Feichtenhofer, Ross Girshick, Piotr Dollar, Aaron Adcock, Wan-Yen Lo, CK Luk\nfeatured-img: \"/assets/images/scaling-vision-figure_1-solutions-to-the-challenges.png\"\n---\n\n*TL;DR: We demonstrate the use of PyTorch with FairScale\u2019s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discuss our techniques for scaling and optimizing these models on a GPU cluster. The goal of this platform scaling effort is to enable research at scale. This blog does not discuss model accuracy, new model architectures, or new training recipes.*\n\n## 1. Introduction", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "## 1. Introduction\n\nLatest vision research [1, 2] demonstrates model scaling as a promising research direction. In this project, we aim to enable our platforms to train massive vision transformer (ViT) [3] models. We present our work on scaling the largest trainable ViT from 1B to 120B parameters in FAIR vision platforms. We wrote ViT in PyTorch and leveraged its support for large-scale, distributed training on a GPU cluster.\n\nIn the rest of this blog, we will first discuss the main challenges, namely *scalability*, *optimization*, and *numerical stability*. Then we will discuss how we tackle them with techniques including *data and model parallelism*, *automatic mixed precision*, *kernel fusion*, and *bfloat16*. Finally, we present our results and conclude.\n\n## 2. Main Challenges\n\n### 2.1 Scalability", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### 2.1 Scalability\n\nThe key scalability challenge is to efficiently shard a model\u2019s operations and state across multiple GPUs. A 100B parameter model requires ~200GB of RAM just for parameters, assuming fp16 representation. So, it is impossible to fit the model on a single GPU (A100 has at most 80GB RAM). Therefore, we need some way to efficiently shard a model\u2019s data (input, parameters, activations, and optimizer state) across multiple GPUs.\n\nAnother aspect of this problem is to scale without significantly changing the training recipe. E.g. Certain representation learning recipes use a global batch size of up to 4096 beyond which we start to see accuracy degradation. We cannot scale to more than 4096 GPUs without using some form of tensor or pipeline parallelism.\n\n### 2.2 Optimization", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### 2.2 Optimization\n\nThe key optimization challenge is to maintain high GPU utilization even as we scale the number of model parameters and flops. When we scale models to teraflops and beyond, we start to hit major bottlenecks in our software stack that super-linearly increase training time and reduce accelerator utilization. We require hundreds or thousands of GPUs to run just a single experiment. Improvements in accelerator utilization can lead to significant reductions in cost and improve fleet utilization. It enables us to fund more projects and run more experiments in parallel.\n\n### 2.3 Numerical Stability", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "The key stability challenge is to avoid numerical instability and divergence at large scale. We empirically observed in our experiments that the training instability gets severe and hard to deal with when we scale up model sizes, data, batch sizes, learning rate, etc. Vision Transformers particularly face training instability even at a lower parameter threshold. E.g., we find it challenging to train even ViT-H (with just 630M parameters) in mixed-precision mode without using strong data augmentation. We need to study the model properties and training recipes to make sure that the models train stably and converge.\n\n## 3. Our Solutions\n\n**Figure 1** depicts our solutions to each of the challenges.\n\n\n
\n
\n\n### 3.1 Addressing scaling challenges with data parallelism and model parallelism\n\nWe apply various forms of data and model parallelism to enable fitting very large models in GPU memory.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "We use FairScale\u2019s *FullyShardedDataParallel (FSDP)* API [4], based on PyTorch, to shard parameters, gradients, and optimizer state across multiple GPUs, thereby reducing the memory footprint per GPU. This process consists of the following three steps:\n\n- Step 1: We wrapped the entire model in a single FSDP instance. This shards the model parameters at the end of a forward pass and gathers parameters at the beginning of a forward pass. This enabled us to scale ~3x from 1.5B to 4.5B parameters. \n\n- Step 2: We experimented with wrapping individual model layers in separate FSDP instances. This nested wrapping further reduced the memory footprint by sharding and gathering parameters of individual model layers instead of an entire model. The peak memory is then determined by an individually wrapped transformer block in GPU memory in this mode instead of the entire model.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "- Step 3: We used *activation-checkpoint* to reduce the memory consumption by activations. It saves the input tensors and discards the intermediate activation tensors during the forward pass. These are recomputed during the backward pass.\n\nIn addition, we experimented with model-parallelism techniques such as pipeline parallelism [5], which allow us to scale to more GPUs without increasing the batch size.\n\n### 3.2 Addressing optimization challenges with advanced AMP and kernel fusion\n\n#### Advanced AMP\n\nAutomatic Mixed Precision (AMP) [6] training refers to training models using a lower precision of bits than FP32 or the default but still maintaining accuracy. We experimented with three levels of AMP as described below:\n\n- AMP O1: This refers to training in mixed precision where weights are in FP32 and some operations are in FP16. With AMP O1, the ops that might impact accuracy remain in FP32 and are not autocasted to FP16.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "- AMP O2: This refers to training in mixed precision but with more weights and ops in FP16 than in O1. Weights do not implicitly remain in FP32 and are cast to FP16. A copy of the master weights is maintained in the FP32 precision that is used by the optimizer. If we want the normalization layer weights in FP32 then we need to explicitly use layer wrapping to ensure that.\n\n- Full FP16: This refers to training in full FP16 where weights and operations are in FP16. FP16 is challenging to enable for training due to convergence issues.\n\nWe found that AMP O2 with LayerNorm wrapping in FP32 leads to the best performance without sacrificing accuracy.\n\n#### Kernel Fusion\n\n- To reduce GPU kernel launch overhead and increase GPU work granularity, we experimented with kernel fusions, including fused dropout and fused layer-norm, using the [xformers library](https://github.com/facebookresearch/xformers) [7].\n\n### 3.3 Addressing stability challenges by studying ops numerical stability and training recipes", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "#### BFloat16 in general but with LayerNorm in FP32\n\nThe [bfloat16](https://cloud.google.com/tpu/docs/bfloat16) (BF16) [8] floating-point format provides the same dynamic range as FP32 with a memory footprint identical to FP16. We found that we could train models in the BF16 format using the same set of hyperparameters as in FP32, without special parameter tuning. Nevertheless, we found that we need to keep LayerNorm in FP32 mode in order for the training to converge.\n\n### 3.4 Final training recipe\n\nA summary of the final training recipe.", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "1. Wrap the outer model in an FSDP instance. Enable parameter sharding after the forward pass.\n2. Wrap individual ViT blocks with activation checkpointing, nested FSDP wrapping, and parameter flattening.\n3. Enable mixed precision mode (AMP O2) with bfloat16 representation. Maintain the optimizer state in FP32 precision to enhance numerical stability.\n4. Wrap normalization layers like LayerNorm in FP32 for better numerical stability.\n5. Maximize the Nvidia TensorCore utilization by keeping matrix dimensions to be multiple of 8. For More details check [Nvidia Tensor Core Performance Guide](https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9926-tensor-core-performance-the-ultimate-guide.pdf).\n\n## 4. Results", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "## 4. Results\n\nIn this section, we show the scaling results of ViT on three types of tasks: (1) image classification, (2) object detection (3) video understanding. **Our key result is that we are able to train massive ViT backbones across these vision tasks after applying the discussed scaling and optimization techniques. This enables vision research at a much larger scale.** We trained the models to convergence to verify that we maintain the current baselines even with all the optimizations. A common trend in Figures 2, 3, 4 is that we are able to train up to 25B-param models with an epoch time of less than 4 hours on 128 A100 GPUs. The 60B and 120B models are relatively slower to train.\n\n**Figure 2** shows the *image-classification* scaling result. It plots the epoch time for training ViTs on ImageNet using 128 A100-80GB GPUs with different model sizes.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "\nFigure 2: Image-classification scaling result.\n
\n\n**Figure 3** shows the *object-detection* scaling result. It plots the epoch time for training [ViTDet](https://arxiv.org/abs/2203.16527) [9] with different ViT backbones on COCO using 128 A100-80GB GPUs.\n\n\n
\n
\n\n\nFigure 3: Object-detection scaling result.\n
\n\n**Figure 4** shows the *video-understanding* scaling result. It plots the epoch time for training [MViTv2](https://arxiv.org/abs/2112.01526) [10] models on [Kinetics 400](https://www.deepmind.com/open-source/kinetics) [11] using 128 V100 (32 GB) GPUs in FP32.\n\n\n
\n
\n\n\nFigure 4: Video-understanding scaling result.\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "**Figure 5** shows the optimization result with the ViT-H model in Figure 2 on 8 A100-40GB GPUs.\nThree versions are used: (1) the baseline uses PyTorch\u2019s DDP [12] with AMP O1, (2) FSDP + AMP-O2 + other optimizations, and (3) FSDP + FP16 + other optimizations. These optimizations altogether speed up the training by up to 2.2x.\n\n\n
\n
\n\n\nFigure 5: Training speedups from various optimizations.\n
\n\n## 5. Concluding Remarks\n\nWe have demonstrated the use of PyTorch with FairScale\u2019s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discuss our techniques for scaling and optimizing these models on a GPU cluster. We hope that this article can motivate others to develop large-scale ML models with PyTorch and its ecosystem.\n\n## References\n\n[1] [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377)", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "[2] [Revisiting Weakly Supervised Pre-Training of Visual Perception Models](https://arxiv.org/abs/2201.08371)\n\n[3] [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929v2)\n\n[4] [fairscale.nn.FullyShardedDataParallel](https://fairscale.readthedocs.io/en/stable/api/nn/fsdp.html)\n\n[5] [Pipeline parallelism in PyTorch](https://pytorch.org/docs/stable/pipeline.html)\n\n[6] [Automatic Mixed Precision (AMP) in PyTorch](https://pytorch.org/docs/stable/amp.html#module-torch.amp)\n\n[7] [xformers](https://github.com/facebookresearch/xformers)\n\n[8] [The bfloat16 numerical format](https://cloud.google.com/tpu/docs/bfloat16)\n\n[9] [Exploring Plain Vision Transformer Backbones for Object Detection](https://arxiv.org/abs/2203.16527)\n\n[10] [MViTv2: Improved Multiscale Vision Transformers for Classification and Detection](https://arxiv.org/abs/2112.01526)\n\n[11] [https://www.deepmind.com/open-source/kinetics](https://www.deepmind.com/open-source/kinetics)", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "[12] [Getting Started with Distributed Data Parallel (DDP)](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html)", "metadata": {"source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Announcing PyTorch Ecosystem Day'\nauthor: Team PyTorch\n---\n\nWe\u2019re proud to announce our first PyTorch Ecosystem Day. The virtual, one-day event will focus completely on our Ecosystem and Industry PyTorch communities!\n\n\nPyTorch is a deep learning framework of choice for academics and companies, all thanks to its rich ecosystem of tools and strong community. As with our developers, our ecosystem partners play a pivotal role in the development and growth of the community.\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
+{"page_content": "We will be hosting our first PyTorch Ecosystem Day, a virtual event designed for our ecosystem and industry communities to showcase their work and discover new opportunities to collaborate. \n \nPyTorch Ecosystem Day will be held on April 21, with both a morning and evening session, to ensure we reach our global community. Join us virtually for a day filled with discussions on new developments, trends, challenges, and best practices through keynotes, breakout sessions, and a unique networking opportunity hosted through Gather.Town . \n\n## Event Details\nApril 21, 2021 (Pacific Time)\nFully digital experience \n \n* Morning Session: (EMEA)\nOpening Talks - 8:00 am-9:00 am PT\nPoster Exhibition & Breakout Sessions - 9:00 am-12:00 pm PT \n\n* Evening Session (APAC/US)\nOpening Talks - 3:00 pm-4:00 pm PT\nPoster Exhibition & Breakout Sessions - 3:00 pm-6:00 pm PT \n\n* Networking - 9:00 am-7:00 pm PT", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
+{"page_content": "### There are two ways to participate in PyTorch Ecosystem Day:\n \n1. **Poster Exhibition** from the PyTorch ecosystem and industry communities covering a variety of topics. Posters are available for viewing throughout the duration of the event. To be part of the poster exhibition, please see below for submission details. If your poster is accepted, we highly recommend tending your poster during one of the morning or evening sessions or both!\n \n2. **Breakout Sessions** are 40-min sessions freely designed by the community. The breakouts can be talks, demos, tutorials or discussions. Note: you must have an accepted poster to apply for the breakout sessions.", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
+{"page_content": "Call for posters now open! [Submit your proposal](https://pytorchecosystemday.fbreg.com/posters) today! Please send us the **title** and **summary** of your projects, tools, and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects. Please no sales pitches. **Deadline for submission is March 18, 2021.** \n\nVisit [pytorchecosystemday.fbreg.com](http://pytorchecosystemday.fbreg.com) for more information and we look forward to welcoming you to PyTorch Ecosystem Day on April 21st!", "metadata": {"source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.5 released, new and updated APIs including C++ frontend API parity with Python'\nauthor: Team PyTorch\n---\n\n\nToday, we\u2019re announcing the availability of PyTorch 1.5, along with new and updated libraries. This release includes several major new API additions and improvements. PyTorch now includes a significant update to the C++ frontend, \u2018channels last\u2019 memory format for computer vision models, and a stable release of the distributed RPC framework used for model-parallel training. The release also has new APIs for autograd for hessians and jacobians, and an API that allows the creation of Custom C++ Classes that was inspired by pybind.\n\nYou can find the detailed release notes [here](https://github.com/pytorch/pytorch/releases).\n\n## C++ Frontend API (Stable)\n\nThe C++ frontend API is now at parity with Python, and the features overall have been moved to \u2018stable\u2019 (previously tagged as experimental). Some of the major highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
+{"page_content": "* Now with ~100% coverage and docs for C++ torch::nn module/functional, users can easily translate their model from Python API to C++ API, making the model authoring experience much smoother.\n* Optimizers in C++ had deviated from the Python equivalent: C++ optimizers can\u2019t take parameter groups as input while the Python ones can. Additionally, step function implementations were not exactly the same. With the 1.5 release, C++ optimizers will always behave the same as the Python equivalent.\n* The lack of tensor multi-dim indexing API in C++ is a well-known issue and had resulted in many posts in PyTorch Github issue tracker and forum. The previous workaround was to use a combination of `narrow` / `select` / `index_select` / `masked_select`, which was clunky and error-prone compared to the Python API\u2019s elegant `tensor[:, 0, ..., mask]` syntax. With the 1.5 release, users can use `tensor.index({Slice(), 0, \"...\", mask})` to achieve the same purpose.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
+{"page_content": "## \u2018Channels last\u2019 memory format for Computer Vision models (Experimental)\n\n\u2018Channels last\u2019 memory layout unlocks ability to use performance efficient convolution algorithms and hardware (NVIDIA\u2019s Tensor Cores, FBGEMM, QNNPACK). Additionally, it is designed to automatically propagate through the operators, which allows easy switching between memory layouts.\n\nLearn more [here](https://github.com/pytorch/pytorch/wiki/Writing-memory-format-aware-operators) on how to write memory format aware operators.\n\n## Custom C++ Classes (Experimental)\n\nThis release adds a new API, `torch::class_`, for binding custom C++ classes into TorchScript and Python simultaneously. This API is almost identical in syntax to [pybind11](https://pybind11.readthedocs.io/en/stable/). It allows users to expose their C++ class and its methods to the TorchScript type system and runtime system such that they can instantiate and manipulate arbitrary C++ objects from TorchScript and Python. An example C++ binding:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
+{"page_content": "```python\ntemplate \nstruct MyStackClass : torch::CustomClassHolder {\n std::vector stack_;\n MyStackClass(std::vector init) : stack_(std::move(init)) {}\n\n void push(T x) {\n stack_.push_back(x);\n }\n T pop() {\n auto val = stack_.back();\n stack_.pop_back();\n return val;\n }\n};\n\nstatic auto testStack =\n torch::class_>(\"myclasses\", \"MyStackClass\")\n .def(torch::init>())\n .def(\"push\", &MyStackClass::push)\n .def(\"pop\", &MyStackClass::pop)\n .def(\"size\", [](const c10::intrusive_ptr& self) {\n return self->stack_.size();\n });\n```\n\n Which exposes a class you can use in Python and TorchScript like so:\n\n```python\n@torch.jit.script\ndef do_stacks(s : torch.classes.myclasses.MyStackClass):\n s2 = torch.classes.myclasses.MyStackClass([\"hi\", \"mom\"])\n print(s2.pop()) # \"mom\"\n s2.push(\"foobar\")\n return s2 # [\"hi\", \"foobar\"]\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
+{"page_content": "You can try it out in the tutorial [here](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html).\n\n\n## Distributed RPC framework APIs (Now Stable)\n\nThe Distributed [RPC framework](https://pytorch.org/docs/stable/rpc.html) was launched as experimental in the 1.4 release and the proposal is to mark Distributed RPC framework as stable and no longer experimental. This work involves a lot of enhancements and bug fixes to make the distributed RPC framework more reliable and robust overall, as well as adding a couple of new features, including profiling support, using TorchScript functions in RPC, and several enhancements for ease of use. Below is an overview of the various APIs within the framework:\n\n### RPC API\nThe RPC API allows users to specify functions to run and objects to be instantiated on remote nodes. These functions are transparently recorded so that gradients can backpropagate through remote nodes using Distributed Autograd.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
+{"page_content": "### Distributed Autograd\nDistributed Autograd connects the autograd graph across several nodes and allows gradients to flow through during the backwards pass. Gradients are accumulated into a context (as opposed to the .grad field as with Autograd) and users must specify their model\u2019s forward pass under a with `dist_autograd.context()` manager in order to ensure that all RPC communication is recorded properly. Currently, only FAST mode is implemented (see [here](https://pytorch.org/docs/stable/rpc/distributed_autograd.html#distributed-autograd-design) for the difference between FAST and SMART modes).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
+{"page_content": "### Distributed Optimizer\nThe distributed optimizer creates RRefs to optimizers on each worker with parameters that require gradients, and then uses the RPC API to run the optimizer remotely. The user must collect all remote parameters and wrap them in an `RRef`, as this is required input to the distributed optimizer. The user must also specify the distributed autograd `context_id` so that the optimizer knows in which context to look for gradients.\n\nLearn more about distributed RPC framework APIs [here](https://pytorch.org/docs/stable/rpc.html).\n\n## New High level autograd API (Experimental)\n\nPyTorch 1.5 brings new functions including jacobian, hessian, jvp, vjp, hvp and vhp to the `torch.autograd.functional` submodule. This feature builds on the current API and allows the user to easily perform these functions.\n\nDetailed design discussion on GitHub can be found [here](https://github.com/pytorch/pytorch/issues/30632).\n\n## Python 2 no longer supported", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
+{"page_content": "Starting PyTorch 1.5.0, we will no longer support Python 2, specifically version 2.7. Going forward support for Python will be limited to Python 3, specifically Python 3.5, 3.6, 3.7 and 3.8 (first enabled in PyTorch 1.4.0).\n\n\n*We\u2019d like to thank the entire PyTorch team and the community for all their contributions to this work.*\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'OpenMined and PyTorch partner to launch fellowship funding for privacy-preserving ML community'\nauthor: Andrew Trask (OpenMined/U.Oxford), Shubho Sengupta, Laurens van der Maaten, Joe Spisak\nexcerpt: Many applications of machine learning (ML) pose a range of security and privacy challenges.\n---\n\n\n

\n
\n\nMany applications of machine learning (ML) pose a range of security and privacy challenges. In particular, users may not be willing or allowed to share their data, which prevents them from taking full advantage of ML platforms like PyTorch. To take the field of privacy-preserving ML (PPML) forward, OpenMined and PyTorch are announcing plans to jointly develop a combined platform to accelerate PPML research as well as new funding for fellowships.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
+{"page_content": "There are many techniques attempting to solve the problem of privacy in ML, each at various levels of maturity. These include (1) homomorphic encryption, (2) secure multi-party computation, (3) trusted execution environments, (4) on-device computation, (5) federated learning with secure aggregation, and (6) differential privacy. Additionally, a number of open source projects implementing these techniques were created with the goal of enabling research at the intersection of privacy, security, and ML. Among them, PySyft and CrypTen have taken an \u201cML-first\u201d approach by presenting an API that is familiar to the ML community, while masking the complexities of privacy and security protocols. We are excited to announce that these two projects are now collaborating closely to build a mature PPML ecosystem around PyTorch.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
+{"page_content": "Additionally, to bolster this ecosystem and take the field of privacy preserving ML forward, we are also calling for contributions and supporting research efforts on this combined platform by providing funding to support the OpenMined community and the researchers that contribute, build proofs of concepts and desire to be on the cutting edge of how privacy-preserving technology is applied. We will provide funding through the [RAAIS Foundation](https://www.raais.org/), a non-profit organization with a mission to advance education and research in artificial intelligence for the common good. We encourage interested parties to apply to one or more of the fellowships listed below.\n\n## Tools Powering the Future of Privacy-Preserving ML", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
+{"page_content": "The next generation of privacy-preserving open source tools enable ML researchers to easily experiment with ML models using secure computing techniques without needing to be cryptography experts. By integrating with PyTorch, PySyft and CrypTen offer familiar environments for ML developers to research and apply these techniques as part of their work.\n\n**PySyft** is a Python library for secure and private ML developed by the OpenMined community. It is a flexible, easy-to-use library that makes secure computation techniques like [multi-party computation (MPC)](https://en.wikipedia.org/wiki/Secure_multi-party_computation) and privacy-preserving techniques like [differential privacy](https://en.wikipedia.org/wiki/Differential_privacy) accessible to the ML community. It prioritizes ease of use and focuses on integrating these techniques into end-user use cases like federated learning with mobile phones and other edge devices, encrypted ML as a service, and privacy-preserving data science.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
+{"page_content": "**CrypTen** is a framework built on PyTorch that enables private and secure ML for the PyTorch community. It is the first step along the journey towards a privacy-preserving mode in PyTorch that will make secure computing techniques accessible beyond cryptography researchers. It currently implements [secure multiparty computation](https://en.wikipedia.org/wiki/Secure_multi-party_computation) with the goal of offering other secure computing backends in the near future. Other benefits to ML researchers include:\n\n* It is **ML first** and presents secure computing techniques via a CrypTensor object that looks and feels exactly like a PyTorch Tensor. This allows the user to use automatic differentiation and neural network modules akin to those in PyTorch.\n* The framework focuses on **scalability and performance** and is built with real-world challenges in mind.", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
+{"page_content": "The focus areas for CrypTen and PySyft are naturally aligned and complement each other. The former focuses on building support for various secure and privacy preserving techniques on PyTorch through an encrypted tensor abstraction, while the latter focuses on end user use cases like deployment on edge devices and a user friendly data science platform.\n\nWorking together will enable PySyft to use CrypTen as a backend for encrypted tensors. This can lead to an increase in performance for PySyft and the adoption of CrypTen as a runtime by PySyft\u2019s userbase. In addition to this, PyTorch is also adding cryptography friendly features such as support for cryptographically secure random number generation. Over the long run, this allows each library to focus exclusively on its core competencies while enjoying the benefits of the synergistic relationship.\n\n## New Funding for OpenMined Contributors", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
+{"page_content": "We are especially excited to announce that the PyTorch team has invested $250,000 to support OpenMined in furthering the development and proliferation of privacy-preserving ML. This gift will be facilitated via the [RAAIS Foundation](https://www.raais.org/) and will be available immediately to support paid fellowship grants for the OpenMined community.\n\n## How to get involved\n\nThanks to the support from the PyTorch team, OpenMined is able to offer three different opportunities for you to participate in the project\u2019s development. Each of these fellowships furthers our shared mission to lower the barrier-to-entry for privacy-preserving ML and to create a more privacy-preserving world.\n\n### Core PySyft CrypTen Integration Fellowships", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
+{"page_content": "During these fellowships, we will integrate CrypTen as a supported backend for encrypted computation in PySyft. This will allow for the high-performance, secure multi-party computation capabilities of CrypTen to be used alongside other important tools in PySyft such as differential privacy and federated learning. For more information on the roadmap and how to apply for a paid fellowship, check out the project\u2019s [call for contributors](https://blog.openmined.org/openmined-pytorch-fellowship-crypten-project).\n\n### Federated Learning on Mobile, Web, and IoT Devices", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
+{"page_content": "During these fellowships, we will be extending PyTorch with the ability to perform federated learning across mobile, web, and IoT devices. To this end, a PyTorch front-end will be able to coordinate across federated learning backends that run in Javascript, Kotlin, Swift, and Python. Furthermore, we will also extend PySyft with the ability to coordinate these backends using peer-to-peer connections, providing low latency and the ability to run secure aggregation as a part of the protocol. For more information on the roadmap and how to apply for a paid fellowship, check out the project\u2019s [call for contributors](https://blog.openmined.org/announcing-the-pytorch-openmined-federated-learning-fellowships).\n\n### Development Challenges", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
+{"page_content": "Over the coming months, we will issue regular open competitions for increasing the performance and security of the PySyft and PyGrid codebases. For performance-related challenges, contestants will compete (for a cash prize) to make a specific PySyft demo (such as federated learning) as fast as possible. For security-related challenges, contestants will compete to hack into a PyGrid server. The first to demonstrate their ability will win the cash bounty! For more information on the challenges and to sign up to receive emails when each challenge is opened, [sign up here](http://blog.openmined.org/announcing-the-openmined-pytorch-development-challenges).\n\nTo apply, select one of the above projects and identify a role that matches your strengths!\n\nCheers,\n\nAndrew, Laurens, Joe, and Shubho", "metadata": {"source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'How Computational Graphs are Constructed in PyTorch'\nauthor: Preferred Networks\nfeatured-img: 'assets/images/augmented_computational_graph.png'\n---\n\nIn the previous [post](https://pytorch.org/blog/overview-of-pytorch-autograd-engine/) we went over the theoretical foundations of automatic differentiation and reviewed the implementation in PyTorch. In this post, we will be showing the parts of PyTorch involved in creating the graph and executing it. In order to understand the following contents, please read @ezyang\u2019s wonderful [blog post](http://blog.ezyang.com/2019/05/pytorch-internals/) about PyTorch internals.\n\n# Autograd components\n\nFirst of all, let\u2019s look at where the different components of autograd live:", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "[tools/autograd](https://github.com/pytorch/pytorch/tree/release/1.9/tools/autograd): Here we can find the definition of the derivatives as we saw in the previous post [derivatives.yaml](https://github.com/pytorch/pytorch/blob/release/1.9/tools/autograd/derivatives.yaml), several python scripts and a folder called [templates](https://github.com/pytorch/pytorch/tree/release/1.9/tools/autograd/templates). These scripts and the templates are used at building time to generate the C++ code for the derivatives as specified in the yaml file. Also, the scripts here generate wrappers for the regular ATen functions so that the computational graph can be constructed.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "[torch/autograd](https://github.com/pytorch/pytorch/tree/release/1.9/torch/autograd): This folder is where the autograd components that can be used directly from python are located. In [function.py](https://github.com/pytorch/pytorch/blob/release/1.9/torch/autograd/function.py) we find the actual definition of `torch.autograd.Function`, a class used by users to write their own differentiable functions in python as per the documentation. [functional.py](https://github.com/pytorch/pytorch/blob/release/1.9/torch/autograd/functional.py) holds components for functionally computing the jacobian vector product, hessian, and other gradient related computations of a given function.\nThe rest of the files have additional components such as gradient checkers, anomaly detection, and the autograd profiler.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "[torch/csrc/autograd](https://github.com/pytorch/pytorch/tree/release/1.9/torch/csrc/autograd): This is where the graph creation and execution-related code lives.\nAll this code is written in C++, since it is a critical part that is required to be extremely performant. Here we have several files that implement the engine, metadata storage, and all the needed components. Alongside this, we have several files whose names start with `python_`, and their main responsibility is to allow python objects to be used in the autograd engine.\n\n# Graph Creation\n\n[Previously](https://pytorch.org/blog/overview-of-pytorch-autograd-engine/), we described the creation of a computational graph. Now, we will see how PyTorch creates these graphs with references to the actual codebase.\n\n\n
\n
\nFigure 1: Example of an augmented computational graph\n
", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "It all starts when in our python code, where we request a tensor to require the gradient.\n\n```py\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n```\n\nWhen the `required_grad` flag is set in tensor creation, c10 will [allocate](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/c10/core/TensorImpl.cpp#L382-L406) an `AutogradMeta` object that is used to hold the graph information.\n\n```c++\n\nvoid TensorImpl::set_requires_grad(bool requires_grad) {\n ...\n if (!autograd_meta_)\n autograd_meta_ = impl::GetAutogradMetaFactory()->make();\n autograd_meta_->set_requires_grad(requires_grad, this);\n}\n```\n\n\nThe `AutogradMeta` object is defined in [torch/csrc/autograd/variable.h](https://github.com/pytorch/pytorch/blob/release/1.9/torch/csrc/autograd/variable.h#L190-L286) as follows:\n\n```c++\n\nstruct TORCH_API AutogradMeta : public c10::AutogradMetaInterface {\n std::string name_;", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Variable grad_;\n std::shared_ptr grad_fn_;\n std::weak_ptr grad_accumulator_;\n // other fields and methods\n ...\n};\n```\n\nThe most important fields in this structure are the computed gradient in `grad_` and a pointer to the function `grad_fn` that will be called by the engine to produce the actual gradient. Also, there is a gradient accumulator object that is used to add together all the different gradients where this tensor is involved as we will see in the graph execution.\n\n### Graphs, Nodes and Edges.\n\nNow, when we call a differentiable function that takes this tensor as an argument, the associated metadata will be populated. Let\u2019s suppose that we call a regular torch function that is implemented in ATen. Let it be the multiplication as in our previous blog post example. The resulting tensor has a field called `grad_fn` that is essentially a pointer to the function that will be used to compute the gradient of that operation.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```py\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> v = x[0] * x[1]\n>>> v\ntensor(0.3750, grad_fn=)\n```\n\nHere we see that the tensors\u2019 `grad_fn` has a `MulBackward0` value. This function is the same that was written in the [derivatives.yaml](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/tools/autograd/derivatives.yaml#L840-L843) file, and its C++ code was generated automatically by all the scripts in `tools/autograd`. It\u2019s auto-generated source code can be seen in `torch/csrc/autograd/generated/Functions.cpp`.\n\n```c++\nvariable_list MulBackward0::apply(variable_list&& grads) {\n std::lock_guard lock(mutex_);", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "IndexRangeGenerator gen;\n auto self_ix = gen.range(1);\n auto other_ix = gen.range(1);\n variable_list grad_inputs(gen.size());\n auto& grad = grads[0];\n auto self = self_.unpack();\n auto other = other_.unpack();\n bool any_grad_defined = any_variable_defined(grads);\n if (should_compute_output({ other_ix })) {\n auto grad_result = any_grad_defined ? (mul_tensor_backward(grad, self, other_scalar_type)) : Tensor();\n copy_range(grad_inputs, other_ix, grad_result);\n }\n if (should_compute_output({ self_ix })) {\n auto grad_result = any_grad_defined ? (mul_tensor_backward(grad, other, self_scalar_type)) : Tensor();\n copy_range(grad_inputs, self_ix, grad_result);\n }\n return grad_inputs;\n}\n```", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "The `grad_fn` objects inherit from the [`TraceableFunction`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/function.h#L535-L541) class, a descendant of `Node` with just a property set to enable tracing for debugging and optimization purposes. A graph by definition has nodes and edges, so these functions are indeed the nodes of the computational graph that are linked together by using `Edge` objects to enable the graph traversal later on.\n\nThe `Node` definition can be found in the [torch/csrc/autograd/function.h](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/function.h#L50-L533) file.\n\n```c++\nstruct TORCH_API Node : std::enable_shared_from_this {\n ...\n /// Evaluates the function on the given inputs and returns the result of the\n /// function call.\n variable_list operator()(variable_list&& inputs) {\n ...\n }", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "protected:\n /// Performs the `Node`'s actual operation.\n virtual variable_list apply(variable_list&& inputs) = 0;\n \u2026\n edge_list next_edges_;\n```\n\nEssentially we see that it has an override of the `operator ()` that performs the call to the actual function, and a pure virtual function called `apply`. The automatically generated functions override this `apply` method as we saw in the `MulBackward0` example above. Finally, the node also has a list of edges to enable graph connectivity.\n\nThe [Edge](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/edge.h#L14-L39) object is used to link `Node`s together and its implementation is straightforward.\n\n```c++\nstruct Edge {\n ...\n /// The function this `Edge` points to.\n std::shared_ptr function;\n /// The identifier of a particular input to the function.\n uint32_t input_nr;\n};\n```", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "It only requires a function pointer (the actual `grad_fn` objects that the edges link together), and an input number that acts as an id for the edge.\n\n### Linking nodes together\n\nWhen we invoke the product operation of two tensors, we enter into the realm of autogenerated code. All the scripts that we saw in `tools/autograd` fill a series of templates that wrap the differentiable functions in ATen. These functions have code to construct the backward graph during the forward pass.\n\nThe [gen_variable_type.py](https://github.com/pytorch/pytorch/blob/release/1.9/tools/autograd/gen_variable_type.py) script is in charge of writing all this wrapping code. This script is called from the [tools/autograd/gen_autograd.py](https://github.com/pytorch/pytorch/blob/release/1.9/tools/autograd/gen_autograd.py) during the pytorch build process and it will output the automatically generated function wrappers to `torch/csrc/autograd/generated/`.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Let\u2019s take a look at how the tensor multiplication generated function looks like. The code has been simplified, but it can be found in the `torch/csrc/autograd/generated/VariableType_4.cpp` file when compiling pytorch from source.\n\n```c++\nat::Tensor mul_Tensor(c10::DispatchKeySet ks, const at::Tensor & self, const at::Tensor & other) {\n ...\n auto _any_requires_grad = compute_requires_grad( self, other );\n std::shared_ptr grad_fn;\n if (_any_requires_grad) {\n // Creates the link to the actual grad_fn and links the graph for backward traversal\n grad_fn = std::shared_ptr(new MulBackward0(), deleteNode);\n grad_fn->set_next_edges(collect_next_edges( self, other ));\n ...\n }\n \u2026\n // Does the actual function call to ATen\n auto _tmp = ([&]() {\n at::AutoDispatchBelowADInplaceOrView guard;\n return at::redispatch::mul(ks & c10::after_autograd_keyset, self_, other_);\n })();", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "auto result = std::move(_tmp);\n if (grad_fn) {\n // Connects the result to the graph\n set_history(flatten_tensor_args( result ), grad_fn);\n }\n ...\n return result;\n}\n```\n\nLet\u2019s walk through the most important lines of this code.\nFirst of all, the `grad_fn` object is created with: ` grad_fn = std::shared_ptr(new MulBackward0(), deleteNode);`.\n\nAfter the `grad_fn` object is created, the edges used to link the nodes together are created by using the `grad_fn->set_next_edges(collect_next_edges( self, other ));` calls.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nstruct MakeNextFunctionList : IterArgs {\n edge_list next_edges;\n using IterArgs::operator();\n void operator()(const Variable& variable) {\n if (variable.defined()) {\n next_edges.push_back(impl::gradient_edge(variable));\n } else {\n next_edges.emplace_back();\n }\n }\n void operator()(const c10::optional& variable) {\n if (variable.has_value() && variable->defined()) {\n next_edges.push_back(impl::gradient_edge(*variable));\n } else {\n next_edges.emplace_back();\n }\n }\n};\n\ntemplate \nedge_list collect_next_edges(Variables&&... variables) {\n detail::MakeNextFunctionList make;\n make.apply(std::forward(variables)...);\n return std::move(make.next_edges);\n}\n```", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
{"page_content": "Given an input variable (it\u2019s just a regular tensor), [`collect_next_edges`](\nhttps://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/function.h#L597-L603)\n will create an `Edge` object by calling [`impl::gradient_edge`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/variable.cpp#L228-L240.)", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n Edge gradient_edge(const Variable& self) {\n // If grad_fn is null (as is the case for a leaf node), we instead\n // interpret the gradient function to be a gradient accumulator, which will\n // accumulate its inputs into the grad property of the variable. These\n // nodes get suppressed in some situations, see \"suppress gradient\n // accumulation\" below. Note that only variables which have `requires_grad =\n // True` can have gradient accumulators.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (const auto& gradient = self.grad_fn()) {\n return Edge(gradient, self.output_nr());\n } else {\n return Edge(grad_accumulator(self), 0);\n }\n }", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "To understand how edges work, let\u2019s assume that an early executed function produced two output tensors, both with their `grad_fn` set, each tensor also has an `output_nr` property with the order in which they were returned. When creating the edges for the current `grad_fn`, an `Edge` object per input variable will be created. The edges will point to the variable\u2019s grad_fn and will also track the `output_nr` to establish ids used when traversing the graph. In the case that the input variables are \u201cleaf\u201d,", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "i.e. they were not produced by any differentiable function, they don\u2019t have a `grad_fn` attribute set. A special function called a gradient accumulator is set by default as seen in the above code snippet.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "After the edges are created, the `grad_fn` graph Node object that is being currently created will hold them using the [`set_next_edges`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/function.h#L258-L263) function. This is what connects `grad_fn`s together, producing the computational graph.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n void set_next_edges(edge_list&& next_edges) {\n next_edges_ = std::move(next_edges);\n for(const auto& next_edge : next_edges_) {\n update_topological_nr(next_edge);\n }\n }", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Now, the forward pass of the function will execute, and after the execution `set_history` will connect the output tensors to the `grad_fn` Node.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\ninline void set_history(\n at::Tensor& variable,\n const std::shared_ptr& grad_fn) {\n AT_ASSERT(grad_fn);\n if (variable.defined()) {\n // If the codegen triggers this, you most likely want to add your newly added function\n // to the DONT_REQUIRE_DERIVATIVE list in tools/autograd/gen_variable_type.py\n TORCH_INTERNAL_ASSERT(isDifferentiableType(variable.scalar_type()));\n auto output_nr =\n grad_fn->add_input_metadata(variable);", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "impl::set_gradient_edge(variable, {grad_fn, output_nr});\n } else {\n grad_fn->add_input_metadata(Node::undefined_input());\n }\n}", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[`set_history`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/functions/utils.h#L58-L72) calls [`set_gradient_edge`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/variable.cpp#L242-L255), which just copies the grad_fn and the `output_nr` to the `AutogradMeta` object that the tensor has.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n void set_gradient_edge(const Variable& self, Edge edge) {\n auto* meta = materialize_autograd_meta(self);\n meta->grad_fn_ = std::move(edge.function);\n meta->output_nr_ = edge.input_nr;\n // For views, make sure this new grad_fn_ is not overwritten unless it is necessary\n // in the VariableHooks::grad_fn below.\n // This logic is only relevant for custom autograd Functions for which multiple\n // operations can happen on a given Tensor before its gradient edge is set when", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// exiting the custom Function.\n auto diff_view_meta = get_view_autograd_meta(self);\n if (diff_view_meta && diff_view_meta->has_bw_view()) {\n diff_view_meta->set_attr_version(self._version());\n }\n }", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "This tensor now will be the input to another function and the above steps will be all repeated. Check the animation below to see how the graph is created.\n\n\n
\n
\nFigure 2: Animation that shows the graph creation\n
", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Registering Python Functions in the graph\n\nWe have seen how autograd creates the graph for the functions included in ATen. However, when we define our differentiable functions in Python, they are also included in the graph!\n\nAn autograd python defined function looks like the following:\n\n```python\nclass Exp(torch.autograd.Function):\n @staticmethod\n def forward(ctx, i):\n result = i.exp()\n ctx.save_for_backward(result)\n return result", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "@staticmethod\n def backward(ctx, grad_output):\n result, = ctx.saved_tensors\n return grad_output * result\n\n# Call the function\nExp.apply(torch.tensor(0.5, requires_grad=True))\n# Outputs: tensor(1.6487, grad_fn=)", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In the above snippet autograd detected our python function when creating the graph. All of this is possible thanks to the [`Function`](https://github.com/pytorch/pytorch/blob/release/1.9/torch/autograd/function.py#L106) class. Let\u2019s take a look at what happens when we call `apply`.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "`apply` is defined in the [`torch._C._FunctionBase`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.cpp#L859-L908) class, but this class is not present in the python source. `_FunctionBase` is defined in C++ by using the python C API to hook C functions together into a single python class. We are looking for a function named", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[`THPFunction_apply`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.cpp#L577-L633).", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n\nPyObject *THPFunction_apply(PyObject *cls, PyObject *inputs)\n{\n \n // Generates the graph node\n THPObjectPtr backward_cls(PyObject_GetAttrString(cls, \"_backward_cls\"));\n if (!backward_cls) return nullptr;\n THPObjectPtr ctx_obj(PyObject_CallFunctionObjArgs(backward_cls, nullptr));\n if (!ctx_obj) return nullptr;\n THPFunction* ctx = (THPFunction*)ctx_obj.get();\n\n auto cdata = std::shared_ptr(new PyNode(std::move(ctx_obj)), deleteNode);\n ctx->cdata = cdata;", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Prepare inputs and allocate context (grad fn)\n // Unpack inputs will collect the edges\n auto info_pair = unpack_input(inputs);\n UnpackedInput& unpacked_input = info_pair.first;\n InputFlags& input_info = info_pair.second;", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Initialize backward function (and ctx)\n bool is_executable = input_info.is_executable;\n cdata->set_next_edges(std::move(input_info.next_edges));\n ctx->needs_input_grad = input_info.needs_input_grad.release();\n ctx->is_variable_input = std::move(input_info.is_variable_input);", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Prepend ctx to input_tuple, in preparation for static method call\n auto num_args = PyTuple_GET_SIZE(inputs);\n THPObjectPtr ctx_input_tuple(PyTuple_New(num_args + 1));\n if (!ctx_input_tuple) return nullptr;\n Py_INCREF(ctx);\n PyTuple_SET_ITEM(ctx_input_tuple.get(), 0, (PyObject*)ctx);\n for (int i = 0; i < num_args; ++i) {\n PyObject *arg = PyTuple_GET_ITEM(unpacked_input.input_tuple.get(), i);\n Py_INCREF(arg);\n PyTuple_SET_ITEM(ctx_input_tuple.get(), i + 1, arg);\n }", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Call forward\n THPObjectPtr tensor_outputs;\n {\n AutoGradMode grad_mode(false);\n THPObjectPtr forward_fn(PyObject_GetAttrString(cls, \"forward\"));\n if (!forward_fn) return nullptr;\n tensor_outputs = PyObject_CallObject(forward_fn, ctx_input_tuple);\n if (!tensor_outputs) return nullptr;\n }", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Here is where the outputs gets the tensors tracked\n return process_outputs(cls, cdata, ctx, unpacked_input, inputs, std::move(tensor_outputs),\n is_executable, node);\n END_HANDLE_TH_ERRORS\n}\n```\n \nAlthough this code is hard to read at first due to all the python API calls, it essentially does the same thing as the auto-generated forward functions that we saw for ATen:", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Create a `grad_fn` object.\nCollect the edges to link the current `grad_fn` with the input tensors one.\nExecute the function `forward`.\nAssign the created `grad_fn` to the output tensors metadata.\n\nThe `grad_fn` object is created in:", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n // Generates the graph node\n THPObjectPtr backward_cls(PyObject_GetAttrString(cls, \"_backward_cls\"));\n if (!backward_cls) return nullptr;\n THPObjectPtr ctx_obj(PyObject_CallFunctionObjArgs(backward_cls, nullptr));\n if (!ctx_obj) return nullptr;\n THPFunction* ctx = (THPFunction*)ctx_obj.get();\n\n auto cdata = std::shared_ptr(new PyNode(std::move(ctx_obj)), deleteNode);\n ctx->cdata = cdata;", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Basically, it asks the python API to get a pointer to the Python object that can execute the user-written function. Then it wraps it into a [`PyNode`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.h#L24-L58) object that is a specialized `Node` object that calls the python interpreter with the provided python function when `apply` is executed during the forward pass. Note that in the code `cdata` is the actual `Node` object that is part", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "of the graph. `ctx` is the object that is passed to the python `forward`/`backward` functions and it is used to store autograd related information by both, the user\u2019s function and PyTorch.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "As in the regular C++ functions we also call `collect_next_edges` to track the inputs `grad_fn` objects, but this is done in [`unpack_input`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.cpp#L413-L448):", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\ntemplate\nstd::pair unpack_input(PyObject *args) {\n ...\n flags.next_edges = (flags.is_executable ? collect_next_edges(unpacked.input_vars) : edge_list());\n return std::make_pair(std::move(unpacked), std::move(flags));\n}", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "After this, the edges are assigned to the `grad_fn` by just doing `cdata->set_next_edges(std::move(input_info.next_edges));` and the forward function is called through the python interpreter C API.\n\nOnce the output tensors are returned from the forward pass, they are processed and converted to variables inside the [`process_outputs`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.cpp#L519-L562) function.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nPyObject* process_outputs(PyObject *op_obj, const std::shared_ptr& cdata,\n THPFunction* grad_fn, const UnpackedInput& unpacked,\n PyObject *inputs, THPObjectPtr&& raw_output, bool is_executable,\n torch::jit::Node* node) {\n ...\n _wrap_outputs(cdata, grad_fn, unpacked.input_vars, raw_output, outputs, is_executable);\n _trace_post_record(node, op_obj, unpacked.input_vars, outputs, is_inplace, unpack_output);", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (is_executable) {\n _save_variables(cdata, grad_fn);\n } ...\n return outputs.release();\n}", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Here, [`_wrap_outputs`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.cpp#L302-L346) is in charge of setting the forward outputs `grad_fn` to the newly created one. For this, it calls another `_wrap_outputs` function defined in a different [file](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/custom_function.cpp#L28-L105), so the process here gets a little confusing.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nstatic void _wrap_outputs(const std::shared_ptr& cdata, THPFunction *self,\n const variable_list &input_vars, PyObject *raw_output, PyObject *outputs, bool is_executable)\n{\n auto cdata_if_executable = is_executable ? cdata : nullptr;\n ...\n\n // Wrap only the tensor outputs.\n // This calls csrc/autograd/custom_function.cpp\n auto wrapped_outputs = _wrap_outputs(input_vars, non_differentiable, dirty_inputs, raw_output_vars, cdata_if_executable);\n...\n}", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\n Edge gradient_edge(const Variable& self) {\n // If grad_fn is null (as is the case for a leaf node), we instead\n // interpret the gradient function to be a gradient accumulator, which will\n // accumulate its inputs into the grad property of the variable. These\n // nodes get suppressed in some situations, see \"suppress gradient\n // accumulation\" below. Note that only variables which have `requires_grad =\n // True` can have gradient accumulators.\n if (const auto& gradient = self.grad_fn()) {\n return Edge(gradient, self.output_nr());\n } else {\n return Edge(grad_accumulator(self), 0);\n }\n }\n```", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "To understand how edges work, let\u2019s assume that an early executed function produced two output tensors, both with their `grad_fn` set, each tensor also has an `output_nr` property with the order in which they were returned. When creating the edges for the current `grad_fn`, an `Edge` object per input variable will be created. The edges will point to the variable\u2019s grad_fn and will also track the `output_nr` to establish ids used when traversing the graph. In the case that the input variables are \u201cleaf\u201d, i.e. they were not produced by any differentiable function, they don\u2019t have a `grad_fn` attribute set. A special function called a gradient accumulator is set by default as seen in the above code snippet.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "After the edges are created, the `grad_fn` graph Node object that is being currently created will hold them using the [`set_next_edges`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/function.h#L258-L263) function. This is what connects `grad_fn`s together, producing the computational graph.\n\n```c++\n void set_next_edges(edge_list&& next_edges) {\n next_edges_ = std::move(next_edges);\n for(const auto& next_edge : next_edges_) {\n update_topological_nr(next_edge);\n }\n }\n```\n\nNow, the forward pass of the function will execute, and after the execution `set_history` will connect the output tensors to the `grad_fn` Node.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\ninline void set_history(\n at::Tensor& variable,\n const std::shared_ptr& grad_fn) {\n AT_ASSERT(grad_fn);\n if (variable.defined()) {\n // If the codegen triggers this, you most likely want to add your newly added function\n // to the DONT_REQUIRE_DERIVATIVE list in tools/autograd/gen_variable_type.py\n TORCH_INTERNAL_ASSERT(isDifferentiableType(variable.scalar_type()));\n auto output_nr =\n grad_fn->add_input_metadata(variable);\n impl::set_gradient_edge(variable, {grad_fn, output_nr});\n } else {\n grad_fn->add_input_metadata(Node::undefined_input());\n }\n}\n```\n\n[`set_history`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/functions/utils.h#L58-L72) calls [`set_gradient_edge`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/variable.cpp#L242-L255), which just copies the grad_fn and the `output_nr` to the `AutogradMeta` object that the tensor has.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\n void set_gradient_edge(const Variable& self, Edge edge) {\n auto* meta = materialize_autograd_meta(self);\n meta->grad_fn_ = std::move(edge.function);\n meta->output_nr_ = edge.input_nr;\n // For views, make sure this new grad_fn_ is not overwritten unless it is necessary\n // in the VariableHooks::grad_fn below.\n // This logic is only relevant for custom autograd Functions for which multiple\n // operations can happen on a given Tensor before its gradient edge is set when\n // exiting the custom Function.\n auto diff_view_meta = get_view_autograd_meta(self);\n if (diff_view_meta && diff_view_meta->has_bw_view()) {\n diff_view_meta->set_attr_version(self._version());\n }\n }\n```\n\nThis tensor now will be the input to another function and the above steps will be all repeated. Check the animation below to see how the graph is created.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\nFigure 2: Animation that shows the graph creation\n
\n\n### Registering Python Functions in the graph\n\nWe have seen how autograd creates the graph for the functions included in ATen. However, when we define our differentiable functions in Python, they are also included in the graph!\n\nAn autograd python defined function looks like the following:\n\n```python\nclass Exp(torch.autograd.Function):\n @staticmethod\n def forward(ctx, i):\n result = i.exp()\n ctx.save_for_backward(result)\n return result\n\n @staticmethod\n def backward(ctx, grad_output):\n result, = ctx.saved_tensors\n return grad_output * result\n\n# Call the function\nExp.apply(torch.tensor(0.5, requires_grad=True))\n# Outputs: tensor(1.6487, grad_fn=)\n```", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "In the above snippet autograd detected our python function when creating the graph. All of this is possible thanks to the [`Function`](https://github.com/pytorch/pytorch/blob/release/1.9/torch/autograd/function.py#L106) class. Let\u2019s take a look at what happens when we call `apply`.\n\n`apply` is defined in the [`torch._C._FunctionBase`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.cpp#L859-L908) class, but this class is not present in the python source. `_FunctionBase` is defined in C++ by using the python C API to hook C functions together into a single python class. We are looking for a function named [`THPFunction_apply`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.cpp#L577-L633). \n\n```c++", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\n\nPyObject *THPFunction_apply(PyObject *cls, PyObject *inputs)\n{\n \n // Generates the graph node\n THPObjectPtr backward_cls(PyObject_GetAttrString(cls, \"_backward_cls\"));\n if (!backward_cls) return nullptr;\n THPObjectPtr ctx_obj(PyObject_CallFunctionObjArgs(backward_cls, nullptr));\n if (!ctx_obj) return nullptr;\n THPFunction* ctx = (THPFunction*)ctx_obj.get();\n\n auto cdata = std::shared_ptr(new PyNode(std::move(ctx_obj)), deleteNode);\n ctx->cdata = cdata;\n\n // Prepare inputs and allocate context (grad fn)\n // Unpack inputs will collect the edges\n auto info_pair = unpack_input(inputs);\n UnpackedInput& unpacked_input = info_pair.first;\n InputFlags& input_info = info_pair.second;\n\n // Initialize backward function (and ctx)\n bool is_executable = input_info.is_executable;\n cdata->set_next_edges(std::move(input_info.next_edges));\n ctx->needs_input_grad = input_info.needs_input_grad.release();\n ctx->is_variable_input = std::move(input_info.is_variable_input);", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "// Prepend ctx to input_tuple, in preparation for static method call\n auto num_args = PyTuple_GET_SIZE(inputs);\n THPObjectPtr ctx_input_tuple(PyTuple_New(num_args + 1));\n if (!ctx_input_tuple) return nullptr;\n Py_INCREF(ctx);\n PyTuple_SET_ITEM(ctx_input_tuple.get(), 0, (PyObject*)ctx);\n for (int i = 0; i < num_args; ++i) {\n PyObject *arg = PyTuple_GET_ITEM(unpacked_input.input_tuple.get(), i);\n Py_INCREF(arg);\n PyTuple_SET_ITEM(ctx_input_tuple.get(), i + 1, arg);\n }\n\n // Call forward\n THPObjectPtr tensor_outputs;\n {\n AutoGradMode grad_mode(false);\n THPObjectPtr forward_fn(PyObject_GetAttrString(cls, \"forward\"));\n if (!forward_fn) return nullptr;\n tensor_outputs = PyObject_CallObject(forward_fn, ctx_input_tuple);\n if (!tensor_outputs) return nullptr;\n }", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "// Here is where the outputs gets the tensors tracked\n return process_outputs(cls, cdata, ctx, unpacked_input, inputs, std::move(tensor_outputs),\n is_executable, node);\n END_HANDLE_TH_ERRORS\n}\n```\n \nAlthough this code is hard to read at first due to all the python API calls, it essentially does the same thing as the auto-generated forward functions that we saw for ATen:\n\nCreate a `grad_fn` object.\nCollect the edges to link the current `grad_fn` with the input tensors one.\nExecute the function `forward`.\nAssign the created `grad_fn` to the output tensors metadata.\n\nThe `grad_fn` object is created in:\n\n```c++\n // Generates the graph node\n THPObjectPtr backward_cls(PyObject_GetAttrString(cls, \"_backward_cls\"));\n if (!backward_cls) return nullptr;\n THPObjectPtr ctx_obj(PyObject_CallFunctionObjArgs(backward_cls, nullptr));\n if (!ctx_obj) return nullptr;\n THPFunction* ctx = (THPFunction*)ctx_obj.get();", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "auto cdata = std::shared_ptr(new PyNode(std::move(ctx_obj)), deleteNode);\n ctx->cdata = cdata;\n```\n\nBasically, it asks the python API to get a pointer to the Python object that can execute the user-written function. Then it wraps it into a [`PyNode`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.h#L24-L58) object that is a specialized `Node` object that calls the python interpreter with the provided python function when `apply` is executed during the forward pass. Note that in the code `cdata` is the actual `Node` object that is part of the graph. `ctx` is the object that is passed to the python `forward`/`backward` functions and it is used to store autograd related information by both, the user\u2019s function and PyTorch.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "As in the regular C++ functions we also call `collect_next_edges` to track the inputs `grad_fn` objects, but this is done in [`unpack_input`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.cpp#L413-L448):\n\n```c++\ntemplate\nstd::pair unpack_input(PyObject *args) {\n ...\n flags.next_edges = (flags.is_executable ? collect_next_edges(unpacked.input_vars) : edge_list());\n return std::make_pair(std::move(unpacked), std::move(flags));\n}\n```\n\nAfter this, the edges are assigned to the `grad_fn` by just doing `cdata->set_next_edges(std::move(input_info.next_edges));` and the forward function is called through the python interpreter C API.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Once the output tensors are returned from the forward pass, they are processed and converted to variables inside the [`process_outputs`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.cpp#L519-L562) function.\n\n```c++\nPyObject* process_outputs(PyObject *op_obj, const std::shared_ptr& cdata,\n THPFunction* grad_fn, const UnpackedInput& unpacked,\n PyObject *inputs, THPObjectPtr&& raw_output, bool is_executable,\n torch::jit::Node* node) {\n ...\n _wrap_outputs(cdata, grad_fn, unpacked.input_vars, raw_output, outputs, is_executable);\n _trace_post_record(node, op_obj, unpacked.input_vars, outputs, is_inplace, unpack_output);\n if (is_executable) {\n _save_variables(cdata, grad_fn);\n } ...\n return outputs.release();\n}\n```", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Here, [`_wrap_outputs`](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/python_function.cpp#L302-L346) is in charge of setting the forward outputs `grad_fn` to the newly created one. For this, it calls another `_wrap_outputs` function defined in a different [file](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/torch/csrc/autograd/custom_function.cpp#L28-L105), so the process here gets a little confusing.\n\n```c++\nstatic void _wrap_outputs(const std::shared_ptr& cdata, THPFunction *self,\n const variable_list &input_vars, PyObject *raw_output, PyObject *outputs, bool is_executable)\n{\n auto cdata_if_executable = is_executable ? cdata : nullptr;\n ...\n\n // Wrap only the tensor outputs.\n // This calls csrc/autograd/custom_function.cpp\n auto wrapped_outputs = _wrap_outputs(input_vars, non_differentiable, dirty_inputs, raw_output_vars, cdata_if_executable);\n...\n}\n```", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
{"page_content": "The called `_wrap_outputs` is the one in charge of setting the autograd metadata in the output tensors:\n\n```c++\nstd::vector> _wrap_outputs(const variable_list &input_vars,\n const std::unordered_set &non_differentiable,\n const std::unordered_set &dirty_inputs,\n const at::ArrayRef> raw_outputs,\n const std::shared_ptr &cdata) {", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "std::unordered_set inputs;\n \u2026\n // Sets the grad_fn and output_nr of an output Variable.\n auto set_history = [&](Variable& var, uint32_t output_nr, bool is_input, bool is_modified,\n bool is_differentiable) {\n // Lots of checks\n if (!is_differentiable) {\n ...\n } else if (is_input) {\n // An input has been returned, but it wasn't modified. Return it as a view\n // so that we can attach a new grad_fn to the Variable.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Run in no_grad mode to mimic the behavior of the forward.\n {\n AutoGradMode grad_mode(false);\n var = var.view_as(var);\n }\n impl::set_gradient_edge(var, {cdata, output_nr});\n } else if (cdata) {\n impl::set_gradient_edge(var, {cdata, output_nr});\n }\n };", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "And this is where `set_gradient_edge` was called and this is how a user-written python function gets included in the computational graph with its associated backward function!\n\n# Closing remarks\n\nThis blog post is intended to be a code overview on how PyTorch constructs the actual computational graphs that we discussed in the previous post. The next entry will deal with how the autograd engine executes these graphs.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Efficient Multi-Objective Neural Architecture Search with Ax\"\nauthor: David Eriksson, Max Balandat\nfeatured-img: \"/assets/images/MOO-NAS-blog-img2-pareto_frontier_plot.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "tl;dr\n\nMulti-Objective Optimization in Ax enables efficient exploration of tradeoffs (e.g. between model performance and model size or latency) in Neural Architecture Search. This method has been successfully applied at Meta for a variety of products such as On-Device AI. In this post, we provide an [end-to-end](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html) tutorial that allows you to try it out yourself.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Introduction\n\nNeural networks continue to grow in both size and complexity. Developing state-of-the-art architectures is often a cumbersome and time-consuming process that requires both domain expertise and large engineering efforts. In an attempt to overcome these challenges, several Neural Architecture Search (NAS) approaches have been proposed to automatically design well-performing architectures without requiring a human in-the-loop.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Despite being very sample-inefficient, na\u00efve approaches like random search and grid search are still popular for both hyperparameter optimization and NAS (a [study](https://hal.archives-ouvertes.fr/hal-02447823/document) conducted at NeurIPS 2019 and ICLR 2020 found that 80% of NeurIPS papers and 88% of ICLR papers tuned their ML model hyperparameters using manual tuning, random search, or grid search). But as models are often time-consuming to train and may require large amounts of computational resources,", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "minimizing the number of configurations that are evaluated is important.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "std::unordered_set inputs;\n \u2026\n // Sets the grad_fn and output_nr of an output Variable.\n auto set_history = [&](Variable& var, uint32_t output_nr, bool is_input, bool is_modified,\n bool is_differentiable) {\n // Lots of checks\n if (!is_differentiable) {\n ...\n } else if (is_input) {\n // An input has been returned, but it wasn't modified. Return it as a view\n // so that we can attach a new grad_fn to the Variable.\n // Run in no_grad mode to mimic the behavior of the forward.\n {\n AutoGradMode grad_mode(false);\n var = var.view_as(var);\n }\n impl::set_gradient_edge(var, {cdata, output_nr});\n } else if (cdata) {\n impl::set_gradient_edge(var, {cdata, output_nr});\n }\n };\n```\n\nAnd this is where `set_gradient_edge` was called and this is how a user-written python function gets included in the computational graph with its associated backward function!\n\n# Closing remarks", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "# Closing remarks\n\nThis blog post is intended to be a code overview on how PyTorch constructs the actual computational graphs that we discussed in the previous post. The next entry will deal with how the autograd engine executes these graphs.", "metadata": {"source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Efficient Multi-Objective Neural Architecture Search with Ax\"\nauthor: David Eriksson, Max Balandat\nfeatured-img: \"/assets/images/MOO-NAS-blog-img2-pareto_frontier_plot.png\"\n---\n\n## tl;dr\n\nMulti-Objective Optimization in Ax enables efficient exploration of tradeoffs (e.g. between model performance and model size or latency) in Neural Architecture Search. This method has been successfully applied at Meta for a variety of products such as On-Device AI. In this post, we provide an [end-to-end](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html) tutorial that allows you to try it out yourself.\n\n## Introduction", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "## Introduction\n\nNeural networks continue to grow in both size and complexity. Developing state-of-the-art architectures is often a cumbersome and time-consuming process that requires both domain expertise and large engineering efforts. In an attempt to overcome these challenges, several Neural Architecture Search (NAS) approaches have been proposed to automatically design well-performing architectures without requiring a human in-the-loop.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "Despite being very sample-inefficient, na\u00efve approaches like random search and grid search are still popular for both hyperparameter optimization and NAS (a [study](https://hal.archives-ouvertes.fr/hal-02447823/document) conducted at NeurIPS 2019 and ICLR 2020 found that 80% of NeurIPS papers and 88% of ICLR papers tuned their ML model hyperparameters using manual tuning, random search, or grid search). But as models are often time-consuming to train and may require large amounts of computational resources, minimizing the number of configurations that are evaluated is important.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
{"page_content": "[Ax](https://ax.dev/) is a general tool for black-box optimization that allows users to explore large search spaces in a sample-efficient manner using [state-of-the art algorithms such as Bayesian Optimization](http://proceedings.mlr.press/v133/turner21a/turner21a.pdf). At Meta, Ax is used in a variety of domains, including hyperparameter tuning, NAS, identifying optimal product settings through large-scale A/B testing, infrastructure optimization, and designing cutting-edge AR/VR hardware.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "In many NAS applications, there is a natural tradeoff between multiple metrics of interest. For instance, when deploying models on-device we may want to maximize model performance (e.g., accuracy), while simultaneously minimizing competing metrics such as power consumption, inference latency, or model size, in order to satisfy deployment constraints. In many cases, we have been able to reduce computational requirements or latency of predictions substantially by accepting a small degradation in model", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "performance (in some cases we were able to both increase accuracy and reduce latency!). Principled methods for exploring such tradeoffs efficiently are key enablers of [Sustainable AI](https://arxiv.org/abs/2111.00364).", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "At Meta, we have successfully used [multi-objective Bayesian NAS](https://research.facebook.com/blog/2021/07/optimizing-model-accuracy-and-latency-using-bayesian-multi-objective-neural-architecture-search/) in Ax to explore such tradeoffs. Our methodology is being used routinely for optimizing AR/VR on-device ML models. Beyond NAS applications, we have also developed [MORBO](https://arxiv.org/pdf/2109.10964.pdf) which is a method for high-dimensional multi-objective optimization that can be used to optimize", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "optical systems for augmented reality (AR).", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Fully automated Multi-Objective NAS with Ax\n\nAx\u2019s Scheduler allows running experiments asynchronously in a closed-loop fashion by continuously deploying trials to an external system, polling for results, leveraging the fetched data to generate more trials, and repeating the process until a stopping condition is met. No human intervention or oversight is required. Features of the Scheduler include:\n\n- Customizability of parallelism, failure tolerance, and many other settings;", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "- A large selection of state-of-the-art optimization algorithms;\n\n- Saving in-progress experiments (to a SQL DB or json) and resuming an experiment from storage;\n\n- Easy extensibility to new backends for running trial evaluations remotely.\n\nThe following illustration from the [Ax scheduler tutorial](https://ax.dev/tutorials/scheduler.html) summarizes how the scheduler interacts with any external system used to run trial evaluations:\n\n", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\nTo run automated NAS with the Scheduler, the main things we need to do are:", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "- Define a [Runner](https://github.com/facebook/Ax/blob/main/ax/core/runner.py#L21), which is responsible for sending off a model with a particular architecture to be trained on a platform of our choice (like Kubernetes, or maybe just a Docker image on our local machine). In the tutorial below, we use TorchX for handling deployment of training jobs.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "- Define a [Metric](https://github.com/facebook/Ax/blob/main/ax/core/metric.py#L21), which is responsible for fetching the objective metrics (such as accuracy, model size, latency) from the training job. In our tutorial, we use Tensorboard to log data, and so can use the Tensorboard metrics that come bundled with Ax.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Tutorial", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "In our tutorial we show how to use Ax to run multi-objective NAS for a simple neural network model on the popular MNIST dataset. While the underlying methodology can be used for more complicated models and larger datasets, we opt for a tutorial that is easily runnable end-to-end on a laptop in less than an hour. In our example, we will tune the widths of two hidden layers, the learning rate, the dropout probability, the batch size, and the number of training epochs. The goal is to trade off performance", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "(accuracy on the validation set) and model size (the number of model parameters) using [multi-objective Bayesian optimization](https://proceedings.neurips.cc/paper/2021/file/11704817e347269b7254e744b5e22dac-Paper.pdf).", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "The tutorial makes use of the following PyTorch libraries:\n\n- [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) (specifying the model and training loop)\n\n- [TorchX](https://github.com/pytorch/torchx) (for running training jobs remotely / asynchronously)\n\n- [BoTorch](https://github.com/pytorch/botorch) (the Bayesian optimization library that powers Ax\u2019s algorithms)", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "The complete runnable example is available as a **[PyTorch Tutorial](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html)**.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Results", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "The final results from the NAS optimization performed in the tutorial can be seen in the tradeoff plot below. Here, each point corresponds to the result of a trial, with the color representing its iteration number, and the star indicating the reference point defined by the thresholds we imposed on the objectives. We see that our method was able to successfully explore the trade-offs between validation accuracy and number of parameters and found both large models with high validation accuracy as well as", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "small models with lower validation accuracy. Depending on the performance requirements and model size constraints, the decision maker can now choose which model to use or analyze further.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Visualizations", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Ax provides a number of visualizations that make it possible to analyze and understand the results of an experiment. Here, we will focus on the performance of the Gaussian process models that model the unknown objectives, which are used to help us discover promising configurations faster. Ax makes it easy to better understand how accurate these models are and how they perform on unseen data via leave-one-out cross-validation. In the figures below, we see that the model fits look quite good - predictions are", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "close to the actual outcomes, and predictive 95% confidence intervals cover the actual outcomes well. Additionally, we observe that the model size `(num_params)` metric is much easier to model than the validation accuracy `(val_acc)` metric.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "\n\n\n\n\n
\n
\n
\n\n
\n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Takeaways\n\n- We showed how to run a fully automated multi-objective Neural Architecture Search using Ax.\n\n- Using the Ax Scheduler, we were able to run the optimization automatically in a fully asynchronous fashion - this can be done locally (as done in the tutorial) or by deploying trials remotely to a cluster (simply by changing the TorchX scheduler configuration).", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "- The state-of-the-art multi-objective Bayesian optimization algorithms available in Ax allowed us to efficiently explore the tradeoffs between validation accuracy and model size.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Advanced Functionality\n\nAx has a number of other advanced capabilities that we did not discuss in our tutorial. Among these are the following:", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Early Stopping", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "When evaluating a new candidate configuration, partial learning curves are typically available while the NN training job is running. We can use the information contained in the partial curves to identify under-performing trials to stop early in order to free up computational resources for more promising candidates. While not demonstrated in the above tutorial, Ax supports early stopping out-of-the-box - see our [early stopping", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "tutorial](https://ax.dev/versions/latest/tutorials/early_stopping/early_stopping.html) for more details.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "High-dimensional search spaces", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "In our tutorial, we used Bayesian optimization with a standard Gaussian process in order to keep the runtime low. However, these models typically scale to only about 10-20 tunable parameters. Our new SAASBO method ([paper](https://proceedings.mlr.press/v161/eriksson21a/eriksson21a.pdf), [Ax tutorial](https://ax.dev/tutorials/saasbo.html), [BoTorch tutorial](https://botorch.org/tutorials/saasbo)) is very sample-efficient and enables tuning hundreds of parameters. SAASBO can easily be enabled by passing", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "`use_saasbo=True` to `choose_generation_strategy`.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "Acknowledgements\n\nWe thank the TorchX team (in particular Kiuk Chung and Tristan Rice) for their help with integrating TorchX with Ax, and the Adaptive Experimentation team @ Meta for their contributions to Ax and BoTorch.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "References\n\n[D. Eriksson, P. Chuang, S. Daulton, M. Balandat. Optimizing model accuracy and latency using Bayesian multi-objective neural architecture search. Meta Research blog, July 2021.](https://research.facebook.com/blog/2021/07/optimizing-model-accuracy-and-latency-using-bayesian-multi-objective-neural-architecture-search/)", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch Adds New Ecosystem Projects for Encrypted AI and Quantum Computing, Expands PyTorch Hub'\nauthor: Team PyTorch\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "The PyTorch ecosystem includes projects, tools, models and libraries from a broad community of researchers in academia and industry, application developers, and ML engineers. The goal of this ecosystem is to support, accelerate, and aid in your exploration with PyTorch and help you push the state of the art, no matter what field you are exploring. Similarly, we are expanding the recently launched PyTorch Hub to further help you discover and reproduce the latest research.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "In this post, we\u2019ll highlight some of the projects that have been added to the PyTorch ecosystem this year and provide some context on the criteria we use to evaluate community projects. We\u2019ll also provide an update on the fast-growing PyTorch Hub and share details on our upcoming PyTorch Summer Hackathon.\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "Recently added ecosystem projects\n\nFrom private AI to quantum computing, we\u2019ve seen the community continue to expand into new and interesting areas. The latest projects include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "- [Advertorch](https://github.com/BorealisAI/advertorch): A Python toolbox for adversarial robustness research. The primary functionalities are implemented in PyTorch. Specifically, AdverTorch contains modules for generating adversarial perturbations and defending against adversarial examples, as well as scripts for adversarial training.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "- [botorch](https://botorch.org/): A modular and easily extensible interface for composing Bayesian optimization primitives, including probabilistic models, acquisition functions, and optimizers.\n\n- [Skorch](https://github.com/skorch-dev/skorch): A high-level library for PyTorch that provides full scikit-learn compatibility.\n\n- [PyTorch Geometric](https://github.com/rusty1s/pytorch_geometric): A library for deep learning on irregular input data such as graphs, point clouds, and manifolds.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "- [PySyft](https://github.com/OpenMined/PySyft): A Python library for encrypted, privacy preserving deep learning.\n\n- [PennyLane](https://pennylane.ai/): A library for quantum ML, automatic differentiation, and optimization of hybrid quantum-classical computations.\n\n- [Flair](https://github.com/zalandoresearch/flair): A very simple framework for state-of-the-art natural language processing (NLP).", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "What makes a great project?\n\nWhen we review project submissions for the PyTorch ecosystem, we take into account a number of factors that we feel are important and that we would want in the projects we use ourselves. Some of these criteria include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "1. *Well-tested:* Users should be confident that ecosystem projects will work well with PyTorch, and include support for CI to ensure that testing is occurring on a continuous basis and the project can run on the latest version of PyTorch.\n2. *Clear utility:* Users should understand where each project fits within the PyTorch ecosystem and the value it brings.\n3. *Permissive licensing:* Users must be able to utilize ecosystem projects without licensing concerns. e.g. BSD-3, Apache-2 and MIT licenses", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "4. *Easy onboarding:* Projects need to have support for binary installation options (pip/Conda), clear documentation and a rich set of tutorials (ideally built into Jupyter notebooks).\n5. *Ongoing maintenance:* Project authors need to be committed to supporting and maintaining their projects.\n6. *Community:* Projects should have (or be on track to building) an active, broad-based community.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "If you would like to have your project included in the PyTorch ecosystem and featured on [pytorch.org/ecosystem](http://pytorch.org/ecosystem), please complete the form [here](https://pytorch.org/ecosystem/join). If you've previously submitted a project for consideration and haven't heard back, we promise to get back to you as soon as we can - we've received a lot of submissions!", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Hub for reproducible research | New models", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "Since [launching](https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/) the PyTorch Hub in beta, we\u2019ve received a lot of interest from the community including the contribution of many new models. Some of the latest include [U-Net for Brain MRI](https://pytorch.org/hub/mateuszbuda_brain-segmentation-pytorch_unet/) contributed by researchers at Duke University, [Single Shot Detection](https://pytorch.org/hub/nvidia_deeplearningexamples_ssd/) from NVIDIA and", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "[Transformer-XL](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_transformerXL/) from HuggingFace.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "We\u2019ve seen organic integration of the PyTorch Hub by folks like [paperswithcode](https://paperswithcode.com/), making it even easier for you to try out the state of the art in AI research. In addition, companies like [Seldon](https://github.com/axsaucedo/seldon-core/tree/pytorch_hub/examples/models/pytorchhub) provide production-level support for PyTorch Hub models on top of Kubernetes.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "What are the benefits of contributing a model in the PyTorch Hub?\n\n- *Compatibility:* PyTorch Hub models are prioritized first for testing by the TorchScript and Cloud TPU teams, and used as baselines for researchers across a number of fields.\n\n- *Visibility:* Models in the Hub will be promoted on [pytorch.org](http://pytorch.org/) as well as on [paperswithcode](https://paperswithcode.com/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "- *Ease of testing and reproducibility:* Each model comes with code, clear preprocessing requirements, and methods/dependencies to run. There is also tight integration with [Google Colab](https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/facebookresearch_WSL-Images_resnext.ipynb#scrollTo=LM_l7vXJvnDM), making it a true single click to get started.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Hub contributions welcome!\n\nWe are actively looking to grow the PyTorch Hub and welcome contributions. You don\u2019t need to be an original paper author to contribute, and we\u2019d love to see the number of domains and fields broaden. So what types of contributions are we looking for?\n\n- Artifacts of a published or an arXiv paper (or something of a similar nature that serves a different audience \u2014 such as ULMFit) that a large audience would need.\n\n AND\n\n- Reproduces the published results (or better)", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "Overall these models are aimed at researchers either trying to reproduce a baseline, or trying to build downstream research on top of the model (such as feature-extraction or fine-tuning) as well as researchers looking for a demo of the paper for subjective evaluation. Please keep this audience in mind when contributing.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "If you are short on inspiration or would just like to find out what the SOTA is an any given field or domain, checkout the Paperswithcode [state-of-the-art gallery](https://paperswithcode.com/sota).", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Summer Hackathon\n\nWe\u2019ll be hosting the first PyTorch Summer Hackathon next month. We invite you to apply to participate in the in-person hackathon on August 8th to 9th at Facebook's Menlo Park campus. We'll be bringing the community together to work on innovative ML projects that can solve a broad range of complex challenges.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "Applications will be reviewed and accepted on a rolling basis until spaces are filled. For those who cannot join this Hackathon in person, we\u2019ll be following up soon with other ways to participate.\n\nPlease visit [this link to apply](https://www.eventbrite.com/e/pytorch-summer-hackathon-in-menlo-park-registration-63756668913).\n\nThank you for being part of the PyTorch community!\n\n-Team PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models'\nauthor: Chaoyang He, Shen Li, Mahdi Soltanolkotabi, and Salman Avestimehr\nfeatured-img: 'assets/images/pipetransformer_overview.png'\n---", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "In this blog post, we describe the first peer-reviewed research paper that explores accelerating the hybrid of PyTorch DDP (`torch.nn.parallel.DistributedDataParallel`) [1] and Pipeline (`torch.distributed.pipeline`) - [PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models](http://proceedings.mlr.press/v139/he21a.html) (Transformers such as BERT [2] and ViT [3]), published at ICML 2021.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "PipeTransformer leverages automated elastic pipelining for efficient distributed training of Transformer models. In PipeTransformer, we designed an adaptive on-the-fly freeze algorithm that can identify and freeze some layers gradually during training and an elastic pipelining system that can dynamically allocate resources to train the remaining active layers. More specifically, PipeTransformer automatically excludes frozen layers from the pipeline, packs active layers into fewer GPUs, and forks more", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "replicas to increase data-parallel width. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on SQuAD and GLUE datasets. Our results show that compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Next, we will introduce the background, motivation, our idea, design, and how we implement the algorithm and system with PyTorch Distributed APIs.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "* Paper: [http://proceedings.mlr.press/v139/he21a.html](http://proceedings.mlr.press/v139/he21a.html)\n* Source Code: [https://DistML.ai](https://distml.ai).\n* Slides: [https://docs.google.com/presentation/d/1t6HWL33KIQo2as0nSHeBpXYtTBcy0nXCoLiKd0EashY/edit?usp=sharing](https://docs.google.com/presentation/d/1t6HWL33KIQo2as0nSHeBpXYtTBcy0nXCoLiKd0EashY/edit?usp=sharing)", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "# Introduction\n\n
\n
\nFigure 1: The number of parameters in Transformer models increases dramatically.\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Large Transformer models [4][5] have powered accuracy breakthroughs in both natural language processing and computer vision. GPT-3 [4] hit a new record high accuracy for nearly all NLP tasks. Vision Transformer (ViT) [3] also achieved 89\\% top-1 accuracy in ImageNet, outperforming state-of-the-art convolutional networks ResNet-152 and EfficientNet. To tackle the growth in model sizes, researchers have proposed various distributed training techniques, including parameter servers [6][7][8], pipeline", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "parallelism [9][10][11][12], intra-layer parallelism [13][14][15], and zero redundancy data-parallel [16].", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Existing distributed training solutions, however, only study scenarios where all model weights are required to be optimized throughout the training (i.e., computation and communication overhead remains relatively static over different iterations). Recent works on progressive training suggest that parameters in neural networks can be trained dynamically:", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "* Freeze Training: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. NeurIPS 2017\n* Efficient Training of BERT by Progressively Stacking. ICML 2019\n* Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping. NeurIPS 2020.\n* On the Transformer Growth for Progressive BERT Training. NACCL 2021", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n
\nFigure 2. Interpretable Freeze Training: DNNs converge bottom-up (Results on CIFAR10 using ResNet). Each pane shows layer-by-layer similarity using SVCCA [17][18]", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "For example, in freeze training [17][18], neural networks usually converge from the bottom-up (i.e., not all layers need to be trained all the way through training). Figure 2 shows an example of how weights gradually stabilize during training in this approach. This observation motivates us to utilize freeze training for distributed training of Transformer models to accelerate training by dynamically allocating resources to focus on a shrinking set of active layers. Such a layer freezing strategy is", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "especially pertinent to pipeline parallelism, as excluding consecutive bottom layers from the pipeline can reduce computation, memory, and communication overhead.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\nFigure 3. The process of PipeTransformer\u2019s automated and elastic pipelining to accelerate distributed training of Transformer models\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "We propose PipeTransformer, an elastic pipelining training acceleration framework that automatically reacts to frozen layers by dynamically transforming the scope of the pipelined model and the number of pipeline replicas. To the best of our knowledge, this is the first paper that studies layer freezing in the context of both pipeline and data-parallel training. Figure 3 demonstrates the benefits of such a combination. First, by excluding frozen layers from the pipeline, the same model can be packed into", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "fewer GPUs, leading to both fewer cross-GPU communications and smaller pipeline bubbles. Second, after packing the model into fewer GPUs, the same cluster can accommodate more pipeline replicas, increasing the width of data parallelism. More importantly, the speedups acquired from these two benefits are multiplicative rather than additive, further accelerating the training.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "The design of PipeTransformer faces four major challenges. First, the freeze algorithm must make on-the-fly and adaptive freezing decisions; however, existing work [17][18] only provides a posterior analysis tool. Second, the efficiency of pipeline re-partitioning results is influenced by multiple factors, including partition granularity, cross-partition activation size, and the chunking (the number of micro-batches) in mini-batches, which require reasoning and searching in a large solution space. Third, to", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "dynamically introduce additional pipeline replicas, PipeTransformer must overcome the static nature of collective communications and avoid potentially complex cross-process messaging protocols when onboarding new processes (one pipeline is handled by one process). Finally, caching can save time for repeated forward propagation of frozen layers, but it must be shared between existing pipelines and newly added ones, as the system cannot afford to create and warm up a dedicated cache for each replica.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "In many NAS applications, there is a natural tradeoff between multiple metrics of interest. For instance, when deploying models on-device we may want to maximize model performance (e.g., accuracy), while simultaneously minimizing competing metrics such as power consumption, inference latency, or model size, in order to satisfy deployment constraints. In many cases, we have been able to reduce computational requirements or latency of predictions substantially by accepting a small degradation in model performance (in some cases we were able to both increase accuracy and reduce latency!). Principled methods for exploring such tradeoffs efficiently are key enablers of [Sustainable AI](https://arxiv.org/abs/2111.00364).", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "At Meta, we have successfully used [multi-objective Bayesian NAS](https://research.facebook.com/blog/2021/07/optimizing-model-accuracy-and-latency-using-bayesian-multi-objective-neural-architecture-search/) in Ax to explore such tradeoffs. Our methodology is being used routinely for optimizing AR/VR on-device ML models. Beyond NAS applications, we have also developed [MORBO](https://arxiv.org/pdf/2109.10964.pdf) which is a method for high-dimensional multi-objective optimization that can be used to optimize optical systems for augmented reality (AR).\n\n## Fully automated Multi-Objective NAS with Ax\n\nAx\u2019s Scheduler allows running experiments asynchronously in a closed-loop fashion by continuously deploying trials to an external system, polling for results, leveraging the fetched data to generate more trials, and repeating the process until a stopping condition is met. No human intervention or oversight is required. Features of the Scheduler include:", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "- Customizability of parallelism, failure tolerance, and many other settings;\n\n- A large selection of state-of-the-art optimization algorithms;\n\n- Saving in-progress experiments (to a SQL DB or json) and resuming an experiment from storage;\n\n- Easy extensibility to new backends for running trial evaluations remotely.\n\nThe following illustration from the [Ax scheduler tutorial](https://ax.dev/tutorials/scheduler.html) summarizes how the scheduler interacts with any external system used to run trial evaluations:\n\n\n\n\n
\n
\n\nTo run automated NAS with the Scheduler, the main things we need to do are:", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "- Define a [Runner](https://github.com/facebook/Ax/blob/main/ax/core/runner.py#L21), which is responsible for sending off a model with a particular architecture to be trained on a platform of our choice (like Kubernetes, or maybe just a Docker image on our local machine). In the tutorial below, we use TorchX for handling deployment of training jobs.\n\n- Define a [Metric](https://github.com/facebook/Ax/blob/main/ax/core/metric.py#L21), which is responsible for fetching the objective metrics (such as accuracy, model size, latency) from the training job. In our tutorial, we use Tensorboard to log data, and so can use the Tensorboard metrics that come bundled with Ax.\n\n## Tutorial", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "## Tutorial\n\nIn our tutorial we show how to use Ax to run multi-objective NAS for a simple neural network model on the popular MNIST dataset. While the underlying methodology can be used for more complicated models and larger datasets, we opt for a tutorial that is easily runnable end-to-end on a laptop in less than an hour. In our example, we will tune the widths of two hidden layers, the learning rate, the dropout probability, the batch size, and the number of training epochs. The goal is to trade off performance (accuracy on the validation set) and model size (the number of model parameters) using [multi-objective Bayesian optimization](https://proceedings.neurips.cc/paper/2021/file/11704817e347269b7254e744b5e22dac-Paper.pdf).\n\nThe tutorial makes use of the following PyTorch libraries:\n\n- [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) (specifying the model and training loop)\n\n- [TorchX](https://github.com/pytorch/torchx) (for running training jobs remotely / asynchronously)", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "- [BoTorch](https://github.com/pytorch/botorch) (the Bayesian optimization library that powers Ax\u2019s algorithms)\n\nThe complete runnable example is available as a **[PyTorch Tutorial](https://pytorch.org/tutorials/intermediate/ax_multiobjective_nas_tutorial.html)**.\n\n### Results\n\nThe final results from the NAS optimization performed in the tutorial can be seen in the tradeoff plot below. Here, each point corresponds to the result of a trial, with the color representing its iteration number, and the star indicating the reference point defined by the thresholds we imposed on the objectives. We see that our method was able to successfully explore the trade-offs between validation accuracy and number of parameters and found both large models with high validation accuracy as well as small models with lower validation accuracy. Depending on the performance requirements and model size constraints, the decision maker can now choose which model to use or analyze further.", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\n### Visualizations\n\nAx provides a number of visualizations that make it possible to analyze and understand the results of an experiment. Here, we will focus on the performance of the Gaussian process models that model the unknown objectives, which are used to help us discover promising configurations faster. Ax makes it easy to better understand how accurate these models are and how they perform on unseen data via leave-one-out cross-validation. In the figures below, we see that the model fits look quite good - predictions are close to the actual outcomes, and predictive 95% confidence intervals cover the actual outcomes well. Additionally, we observe that the model size `(num_params)` metric is much easier to model than the validation accuracy `(val_acc)` metric.\n\n\n\n", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "\n\n\n
\n
\n
\n\n
\n
\n
\n
\n\n## Takeaways\n\n- We showed how to run a fully automated multi-objective Neural Architecture Search using Ax.\n\n- Using the Ax Scheduler, we were able to run the optimization automatically in a fully asynchronous fashion - this can be done locally (as done in the tutorial) or by deploying trials remotely to a cluster (simply by changing the TorchX scheduler configuration).\n\n- The state-of-the-art multi-objective Bayesian optimization algorithms available in Ax allowed us to efficiently explore the tradeoffs between validation accuracy and model size.\n\n## Advanced Functionality\n\nAx has a number of other advanced capabilities that we did not discuss in our tutorial. Among these are the following:\n\n### Early Stopping", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "### Early Stopping\n\nWhen evaluating a new candidate configuration, partial learning curves are typically available while the NN training job is running. We can use the information contained in the partial curves to identify under-performing trials to stop early in order to free up computational resources for more promising candidates. While not demonstrated in the above tutorial, Ax supports early stopping out-of-the-box - see our [early stopping tutorial](https://ax.dev/versions/latest/tutorials/early_stopping/early_stopping.html) for more details.\n\n### High-dimensional search spaces", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "In our tutorial, we used Bayesian optimization with a standard Gaussian process in order to keep the runtime low. However, these models typically scale to only about 10-20 tunable parameters. Our new SAASBO method ([paper](https://proceedings.mlr.press/v161/eriksson21a/eriksson21a.pdf), [Ax tutorial](https://ax.dev/tutorials/saasbo.html), [BoTorch tutorial](https://botorch.org/tutorials/saasbo)) is very sample-efficient and enables tuning hundreds of parameters. SAASBO can easily be enabled by passing `use_saasbo=True` to `choose_generation_strategy`.\n\n## Acknowledgements\n\nWe thank the TorchX team (in particular Kiuk Chung and Tristan Rice) for their help with integrating TorchX with Ax, and the Adaptive Experimentation team @ Meta for their contributions to Ax and BoTorch.\n\n## References", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "## References\n\n[D. Eriksson, P. Chuang, S. Daulton, M. Balandat. Optimizing model accuracy and latency using Bayesian multi-objective neural architecture search. Meta Research blog, July 2021.](https://research.facebook.com/blog/2021/07/optimizing-model-accuracy-and-latency-using-bayesian-multi-objective-neural-architecture-search/)", "metadata": {"source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch Adds New Ecosystem Projects for Encrypted AI and Quantum Computing, Expands PyTorch Hub'\nauthor: Team PyTorch\n---\n\nThe PyTorch ecosystem includes projects, tools, models and libraries from a broad community of researchers in academia and industry, application developers, and ML engineers. The goal of this ecosystem is to support, accelerate, and aid in your exploration with PyTorch and help you push the state of the art, no matter what field you are exploring. Similarly, we are expanding the recently launched PyTorch Hub to further help you discover and reproduce the latest research.\n\nIn this post, we\u2019ll highlight some of the projects that have been added to the PyTorch ecosystem this year and provide some context on the criteria we use to evaluate community projects. We\u2019ll also provide an update on the fast-growing PyTorch Hub and share details on our upcoming PyTorch Summer Hackathon.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
\n\n## Recently added ecosystem projects\n\nFrom private AI to quantum computing, we\u2019ve seen the community continue to expand into new and interesting areas. The latest projects include:\n\n- [Advertorch](https://github.com/BorealisAI/advertorch): A Python toolbox for adversarial robustness research. The primary functionalities are implemented in PyTorch. Specifically, AdverTorch contains modules for generating adversarial perturbations and defending against adversarial examples, as well as scripts for adversarial training.\n\n- [botorch](https://botorch.org/): A modular and easily extensible interface for composing Bayesian optimization primitives, including probabilistic models, acquisition functions, and optimizers.\n\n- [Skorch](https://github.com/skorch-dev/skorch): A high-level library for PyTorch that provides full scikit-learn compatibility.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
+{"page_content": "- [PyTorch Geometric](https://github.com/rusty1s/pytorch_geometric): A library for deep learning on irregular input data such as graphs, point clouds, and manifolds.\n\n- [PySyft](https://github.com/OpenMined/PySyft): A Python library for encrypted, privacy preserving deep learning.\n\n- [PennyLane](https://pennylane.ai/): A library for quantum ML, automatic differentiation, and optimization of hybrid quantum-classical computations.\n\n- [Flair](https://github.com/zalandoresearch/flair): A very simple framework for state-of-the-art natural language processing (NLP).\n\n### What makes a great project?\n\nWhen we review project submissions for the PyTorch ecosystem, we take into account a number of factors that we feel are important and that we would want in the projects we use ourselves. Some of these criteria include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
+{"page_content": "1. *Well-tested:* Users should be confident that ecosystem projects will work well with PyTorch, and include support for CI to ensure that testing is occurring on a continuous basis and the project can run on the latest version of PyTorch.\n2. *Clear utility:* Users should understand where each project fits within the PyTorch ecosystem and the value it brings.\n3. *Permissive licensing:* Users must be able to utilize ecosystem projects without licensing concerns. e.g. BSD-3, Apache-2 and MIT licenses\n4. *Easy onboarding:* Projects need to have support for binary installation options (pip/Conda), clear documentation and a rich set of tutorials (ideally built into Jupyter notebooks).\n5. *Ongoing maintenance:* Project authors need to be committed to supporting and maintaining their projects.\n6. *Community:* Projects should have (or be on track to building) an active, broad-based community.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
+{"page_content": "If you would like to have your project included in the PyTorch ecosystem and featured on [pytorch.org/ecosystem](http://pytorch.org/ecosystem), please complete the form [here](https://pytorch.org/ecosystem/join). If you've previously submitted a project for consideration and haven't heard back, we promise to get back to you as soon as we can - we've received a lot of submissions!\n\n## PyTorch Hub for reproducible research | New models", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
+{"page_content": "Since [launching](https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/) the PyTorch Hub in beta, we\u2019ve received a lot of interest from the community including the contribution of many new models. Some of the latest include [U-Net for Brain MRI](https://pytorch.org/hub/mateuszbuda_brain-segmentation-pytorch_unet/) contributed by researchers at Duke University, [Single Shot Detection](https://pytorch.org/hub/nvidia_deeplearningexamples_ssd/) from NVIDIA and [Transformer-XL](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_transformerXL/) from HuggingFace.\n\nWe\u2019ve seen organic integration of the PyTorch Hub by folks like [paperswithcode](https://paperswithcode.com/), making it even easier for you to try out the state of the art in AI research. In addition, companies like [Seldon](https://github.com/axsaucedo/seldon-core/tree/pytorch_hub/examples/models/pytorchhub) provide production-level support for PyTorch Hub models on top of Kubernetes.", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
+{"page_content": "### What are the benefits of contributing a model in the PyTorch Hub?\n\n- *Compatibility:* PyTorch Hub models are prioritized first for testing by the TorchScript and Cloud TPU teams, and used as baselines for researchers across a number of fields.\n\n- *Visibility:* Models in the Hub will be promoted on [pytorch.org](http://pytorch.org/) as well as on [paperswithcode](https://paperswithcode.com/).\n\n- *Ease of testing and reproducibility:* Each model comes with code, clear preprocessing requirements, and methods/dependencies to run. There is also tight integration with [Google Colab](https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/facebookresearch_WSL-Images_resnext.ipynb#scrollTo=LM_l7vXJvnDM), making it a true single click to get started.\n\n### PyTorch Hub contributions welcome!", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
+{"page_content": "We are actively looking to grow the PyTorch Hub and welcome contributions. You don\u2019t need to be an original paper author to contribute, and we\u2019d love to see the number of domains and fields broaden. So what types of contributions are we looking for?\n\n- Artifacts of a published or an arXiv paper (or something of a similar nature that serves a different audience \u2014 such as ULMFit) that a large audience would need.\n\n AND\n\n- Reproduces the published results (or better)\n\nOverall these models are aimed at researchers either trying to reproduce a baseline, or trying to build downstream research on top of the model (such as feature-extraction or fine-tuning) as well as researchers looking for a demo of the paper for subjective evaluation. Please keep this audience in mind when contributing.\n\nIf you are short on inspiration or would just like to find out what the SOTA is an any given field or domain, checkout the Paperswithcode [state-of-the-art gallery](https://paperswithcode.com/sota).\n\n## PyTorch Summer Hackathon", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
+{"page_content": "We\u2019ll be hosting the first PyTorch Summer Hackathon next month. We invite you to apply to participate in the in-person hackathon on August 8th to 9th at Facebook's Menlo Park campus. We'll be bringing the community together to work on innovative ML projects that can solve a broad range of complex challenges.\n\nApplications will be reviewed and accepted on a rolling basis until spaces are filled. For those who cannot join this Hackathon in person, we\u2019ll be following up soon with other ways to participate.\n\nPlease visit [this link to apply](https://www.eventbrite.com/e/pytorch-summer-hackathon-in-menlo-park-registration-63756668913).\n\nThank you for being part of the PyTorch community!\n\n-Team PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models'\nauthor: Chaoyang He, Shen Li, Mahdi Soltanolkotabi, and Salman Avestimehr\nfeatured-img: 'assets/images/pipetransformer_overview.png'\n---\n\nIn this blog post, we describe the first peer-reviewed research paper that explores accelerating the hybrid of PyTorch DDP (`torch.nn.parallel.DistributedDataParallel`) [1] and Pipeline (`torch.distributed.pipeline`) - [PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models](http://proceedings.mlr.press/v139/he21a.html) (Transformers such as BERT [2] and ViT [3]), published at ICML 2021.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "PipeTransformer leverages automated elastic pipelining for efficient distributed training of Transformer models. In PipeTransformer, we designed an adaptive on-the-fly freeze algorithm that can identify and freeze some layers gradually during training and an elastic pipelining system that can dynamically allocate resources to train the remaining active layers. More specifically, PipeTransformer automatically excludes frozen layers from the pipeline, packs active layers into fewer GPUs, and forks more replicas to increase data-parallel width. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on SQuAD and GLUE datasets. Our results show that compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Next, we will introduce the background, motivation, our idea, design, and how we implement the algorithm and system with PyTorch Distributed APIs.\n\n* Paper: [http://proceedings.mlr.press/v139/he21a.html](http://proceedings.mlr.press/v139/he21a.html)\n* Source Code: [https://DistML.ai](https://distml.ai).\n* Slides: [https://docs.google.com/presentation/d/1t6HWL33KIQo2as0nSHeBpXYtTBcy0nXCoLiKd0EashY/edit?usp=sharing](https://docs.google.com/presentation/d/1t6HWL33KIQo2as0nSHeBpXYtTBcy0nXCoLiKd0EashY/edit?usp=sharing)\n\n# Introduction\n\n
\n
\nFigure 1: The number of parameters in Transformer models increases dramatically.\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Large Transformer models [4][5] have powered accuracy breakthroughs in both natural language processing and computer vision. GPT-3 [4] hit a new record high accuracy for nearly all NLP tasks. Vision Transformer (ViT) [3] also achieved 89\\% top-1 accuracy in ImageNet, outperforming state-of-the-art convolutional networks ResNet-152 and EfficientNet. To tackle the growth in model sizes, researchers have proposed various distributed training techniques, including parameter servers [6][7][8], pipeline parallelism [9][10][11][12], intra-layer parallelism [13][14][15], and zero redundancy data-parallel [16].\n\n\nExisting distributed training solutions, however, only study scenarios where all model weights are required to be optimized throughout the training (i.e., computation and communication overhead remains relatively static over different iterations). Recent works on progressive training suggest that parameters in neural networks can be trained dynamically:", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "* Freeze Training: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. NeurIPS 2017\n* Efficient Training of BERT by Progressively Stacking. ICML 2019\n* Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping. NeurIPS 2020.\n* On the Transformer Growth for Progressive BERT Training. NACCL 2021\n\n\n\n
\n
\n
\nFigure 2. Interpretable Freeze Training: DNNs converge bottom-up (Results on CIFAR10 using ResNet). Each pane shows layer-by-layer similarity using SVCCA [17][18]", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "For example, in freeze training [17][18], neural networks usually converge from the bottom-up (i.e., not all layers need to be trained all the way through training). Figure 2 shows an example of how weights gradually stabilize during training in this approach. This observation motivates us to utilize freeze training for distributed training of Transformer models to accelerate training by dynamically allocating resources to focus on a shrinking set of active layers. Such a layer freezing strategy is especially pertinent to pipeline parallelism, as excluding consecutive bottom layers from the pipeline can reduce computation, memory, and communication overhead.\n\n\n
\n
\nFigure 3. The process of PipeTransformer\u2019s automated and elastic pipelining to accelerate distributed training of Transformer models\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "We propose PipeTransformer, an elastic pipelining training acceleration framework that automatically reacts to frozen layers by dynamically transforming the scope of the pipelined model and the number of pipeline replicas. To the best of our knowledge, this is the first paper that studies layer freezing in the context of both pipeline and data-parallel training. Figure 3 demonstrates the benefits of such a combination. First, by excluding frozen layers from the pipeline, the same model can be packed into fewer GPUs, leading to both fewer cross-GPU communications and smaller pipeline bubbles. Second, after packing the model into fewer GPUs, the same cluster can accommodate more pipeline replicas, increasing the width of data parallelism. More importantly, the speedups acquired from these two benefits are multiplicative rather than additive, further accelerating the training.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "The design of PipeTransformer faces four major challenges. First, the freeze algorithm must make on-the-fly and adaptive freezing decisions; however, existing work [17][18] only provides a posterior analysis tool. Second, the efficiency of pipeline re-partitioning results is influenced by multiple factors, including partition granularity, cross-partition activation size, and the chunking (the number of micro-batches) in mini-batches, which require reasoning and searching in a large solution space. Third, to dynamically introduce additional pipeline replicas, PipeTransformer must overcome the static nature of collective communications and avoid potentially complex cross-process messaging protocols when onboarding new processes (one pipeline is handled by one process). Finally, caching can save time for repeated forward propagation of frozen layers, but it must be shared between existing pipelines and newly added ones, as the system cannot afford to create and warm up a dedicated cache for each replica.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
{"page_content": "\n
\n
\nFigure 4: An Animation to Show the Dynamics of PipeTransformer\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "As shown in the animation (Figure 4), PipeTransformer is designed with four core building blocks to address the aforementioned challenges. First, we design a tunable and adaptive algorithm to generate signals that guide the selection of layers to freeze over different iterations (Freeze Algorithm). Once triggered by these signals, our elastic pipelining module (AutoPipe), then packs the remaining active layers into fewer GPUs by taking both activation sizes and variances of workloads across heterogeneous", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "partitions (frozen layers and active layers) into account. It then splits a mini-batch into an optimal number of micro-batches based on prior profiling results for different pipeline lengths. Our next module, AutoDP, spawns additional pipeline replicas to occupy freed-up GPUs and maintains hierarchical communication process groups to attain dynamic membership for collective communications. Our final module, AutoCache, efficiently shares activations across existing and new data-parallel processes and", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "automatically replaces stale caches during transitions.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Overall, PipeTransformer combines the Freeze Algorithm, AutoPipe, AutoDP, and AutoCache modules to provide a significant training speedup.\nWe evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on GLUE and SQuAD datasets. Our results show that PipeTransformer attains up to 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Finally, we have also developed open-source flexible APIs for PipeTransformer, which offer a clean separation among the freeze algorithm, model definitions, and training accelerations, allowing for transferability to other algorithms that require similar freezing strategies.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "# Overall Design\n\nSuppose we aim to train a massive model in a distributed training system where the hybrid of pipelined model parallelism and data parallelism is used to target scenarios where either the memory of a single GPU device cannot hold the model, or if loaded, the batch size is small enough to avoid running out of memory. More specifically, we define our settings as follows:", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Training task and model definition. We train Transformer models (e.g., Vision Transformer, BERT on large-scale image or text datasets. The Transformer model
has
layers, in which the
th layer is composed of a forward computation function
and a corresponding set of parameters.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Training infrastructure. Assume the training infrastructure contains a GPU cluster that has
GPU servers (i.e. nodes). Each node has
GPUs. Our cluster is homogeneous, meaning that each GPU and server have the same hardware configuration. Each GPU's memory capacity is
. Servers are", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "connected by a high bandwidth network interface such as InfiniBand interconnect.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Pipeline parallelism. In each machine, we load a model
into a pipeline
which has
partitions (
also represents the pipeline length). The
th", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "partition
consists of consecutive layers. We assume each partition is handled by a single GPU device.
, meaning that we can build multiple pipelines for multiple model replicas in a single machine. We assume all GPU devices in a pipeline belonging to the same machine. Our pipeline is a synchronous pipeline, which does not involve stale gradients, and the", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "number of micro-batches is
. In the Linux OS, each pipeline is handled by a single process. We refer the reader to GPipe [10] for more details.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Data parallelism. DDP is a cross-machine distributed data-parallel process group within
parallel workers. Each worker is a pipeline replica (a single process). The
th worker's index (ID) is rank
. For any two pipelines in DDP, they can belong to either the same GPU server or different GPU", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "servers, and they can exchange gradients with the AllReduce algorithm.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Under these settings, our goal is to accelerate training by leveraging freeze training, which does not require all layers to be trained throughout the duration of the training. Additionally, it may help save computation, communication, memory cost, and potentially prevent overfitting by consecutively freezing layers. However, these benefits can only be achieved by overcoming the four challenges of designing an adaptive freezing algorithm, dynamical pipeline re-partitioning, efficient resource reallocation,", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "and cross-process caching, as discussed in the introduction.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\nFigure 5. Overview of PipeTransformer Training System\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "PipeTransformer co-designs an on-the-fly freeze algorithm and an automated elastic pipelining training system that can dynamically transform the scope of the pipelined model and the number of pipeline replicas. The overall system architecture is illustrated in Figure 5. To support PipeTransformer\u2019s elastic pipelining, we maintain a customized version of PyTorch Pipeline. For data parallelism, we use PyTorch DDP as a baseline. Other libraries are standard mechanisms of an operating system", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "(e.g.,multi-processing) and thus avoid specialized software or hardware customization requirements. To ensure the generality of our framework, we have decoupled the training system into four core components: freeze algorithm, AutoPipe, AutoDP, and AutoCache. The freeze algorithm (grey) samples indicators from the training loop and makes layer-wise freezing decisions, which will be shared with AutoPipe", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "(green). AutoPipe is an elastic pipeline module that speeds up training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs (pink), leading to both fewer cross-GPU communications and smaller pipeline bubbles. Subsequently, AutoPipe passes pipeline length information to AutoDP (purple), which then spawns more pipeline replicas to increase data-parallel width, if possible. The illustration also includes an example in which AutoDP", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "introduces a new replica (purple). AutoCache (orange edges) is a cross-pipeline caching module, as illustrated by connections between pipelines. The source code architecture is aligned with Figure 5 for readability and generality.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "# Implementation Using PyTorch APIs\n\nAs can be seen from Figure 5, PipeTransformers contain four components: Freeze Algorithm, AutoPipe, AutoDP, and AutoCache. Among them, AutoPipe and AutoDP relies on PyTorch DDP (`torch.nn.parallel.DistributedDataParallel`) [1] and Pipeline (`torch.distributed.pipeline`), respectively. In this blog, we only highlight the key implementation details of AutoPipe and AutoDP. For details of Freeze Algorithm and AutoCache, please refer to our paper.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "AutoPipe: Elastic Pipelining\n\nAutoPipe can accelerate training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs. This section elaborates on the key components of AutoPipe that dynamically 1) partition pipelines, 2) minimize the number of pipeline devices, and 3) optimize mini-batch chunk size accordingly.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Basic Usage of PyTorch Pipeline\n\nBefore diving into details of AutoPipe, let us warm up the basic usage of PyTorch Pipeline (`torch.distributed.pipeline.sync.Pipe`, see [this tutorial](https://pytorch.org/docs/stable/pipeline.html)). More specially, we present a simple example to understand the design of Pipeline in practice:\n\n```python\n# Step 1: build a model including two linear layers\nfc1 = nn.Linear(16, 8).cuda(0)\nfc2 = nn.Linear(8, 4).cuda(1)", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "# Step 2: wrap the two layers with nn.Sequential\nmodel = nn.Sequential(fc1, fc2)\n\n# Step 3: build Pipe (torch.distributed.pipeline.sync.Pipe)\nmodel = Pipe(model, chunks=8)\n\n# do training/inference\ninput = torch.rand(16, 16).cuda(0)\noutput_rref = model(input)", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "In this basic example, we can see that before initializing `Pipe`, we need to partition the model `nn.Sequential` into multiple GPU devices and set optimal chunk number (`chunks`). Balancing computation time across partitions is critical to pipeline training speed, as skewed workload distributions across stages can lead to stragglers and forcing devices with lighter workloads to wait. The chunk number may also have a non-trivial influence on the throughput of the pipeline.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Balanced Pipeline Partitioning\n\nIn dynamic training system such as PipeTransformer, maintaining optimally balanced partitions in terms of parameter numbers does not guarantee the fastest training speed because other factors also play a crucial role:\n\n\n
\n
\nFigure 6. The partition boundary is in the middle of a skip connection\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "1. Cross-partition communication overhead. Placing a partition boundary in the middle of a skip connection leads to additional communications since tensors in the skip connection must now be copied to a different GPU. For example, with BERT partitions in Figure 6, partition
must take intermediate outputs from both partition
and partition
. In contrast, if the boundary is placed after the addition layer, the communication overhead between partition
and
is visibly smaller. Our measurements show that having cross-device communication is more expensive than having slightly imbalanced partitions (see the Appendix in our paper). Therefore, we do", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "not consider breaking skip connections (highlighted separately as an entire attention layer and MLP layer in green color at line 7 in Algorithm 1.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "2. Frozen layer memory footprint. During training, AutoPipe must recompute partition boundaries several times to balance two distinct types of layers: frozen layers and active layers. The frozen layer's memory cost is a fraction of that inactive layer, given that the frozen layer does not need backward activation maps, optimizer states, and gradients. Instead of launching intrusive profilers to obtain thorough metrics on memory and computational cost, we define a tunable cost factor
to estimate the memory footprint ratio of a frozen layer over the same active layer. Based on empirical measurements in our experimental hardware, we set it to
.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Based on the above two considerations, AutoPipe balances pipeline partitions based on parameter sizes. More specifically, AutoPipe uses a greedy algorithm to allocate all frozen and active layers to evenly distribute partitioned sublayers into
GPU devices. Pseudocode is described as the `load\\_balance()` function in Algorithm 1. The frozen layers are extracted from the original model and kept in a separate model instance
in the first device of a pipeline.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Note that the partition algorithm employed in this paper is not the only option; PipeTransformer is modularized to work with any alternatives.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Pipeline Compression", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Pipeline compression helps to free up GPUs to accommodate more pipeline replicas and reduce the number of cross-device communications between partitions. To determine the timing of compression, we can estimate the memory cost of the largest partition after compression, and then compare it with that of the largest partition of a pipeline at timestep
. To avoid extensive memory profiling, the compression algorithm uses the parameter size as", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "a proxy for the training memory footprint. Based on this simplification, the criterion of pipeline compression is as follows:", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Once the freeze notification is received, AutoPipe will always attempt to divide the pipeline length
by 2 (e.g., from 8 to 4, then 2). By using
as the input, the compression algorithm can verify if the result satisfies the criterion in Equation (1). Pseudocode is shown in lines 25-33 in Algorithm 1. Note that this compression makes the acceleration ratio", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "exponentially increase during training, meaning that if a GPU server has a larger number of GPUs (e.g., more than 8), the acceleration ratio will be further amplified.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Figure 7. Pipeline Bubble:
, and
denote forward, backward, and the optimizer update of micro-batch
on device
, respectively. The total bubble size in each iteration is
times per micro-batch forward and backward cost.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, such a technique can also speed up training by shrinking the size of pipeline bubbles. To explain bubble sizes in a pipeline, Figure 7 depicts how 4 micro-batches run through a 4-device pipeline
. In general, the total bubble size is
times per micro-batch forward and backward cost. Therefore, it is clear that shorter pipelines have smaller bubble sizes.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
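+{"page_content": "As a rough back-of-the-envelope illustration (using the standard GPipe-style estimate rather than a measurement from this work), the idle fraction of a synchronous pipeline can be approximated as (K - 1) / (M + K - 1), so compressing the pipeline while keeping the number of micro-batches fixed shrinks the bubble substantially:\n\n```python\n# Approximate bubble fraction of a synchronous pipeline (GPipe-style estimate).\ndef bubble_fraction(num_partitions: int, num_microbatches: int) -> float:\n    return (num_partitions - 1) / (num_microbatches + num_partitions - 1)\n\nfor k in (8, 4, 2):  # pipeline lengths as AutoPipe progressively compresses\n    print(k, round(bubble_fraction(k, num_microbatches=32), 3))\n# roughly 0.179, 0.086, and 0.03 -- shorter pipelines spend less time idle\n```", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}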
-{"page_content": "Dynamic Number of Micro-Batches", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Prior pipeline parallel systems use a fixed number of micro-batches per mini-batch (
). GPipe suggests
, where
is the number of partitions (pipeline length). However, given that PipeTransformer dynamically configures
, we find", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "it to be sub-optimal to maintain a static
during training. Moreover, when integrated with DDP, the value of
also has an impact on the efficiency of DDP gradient synchronizations. Since DDP must wait for the last micro-batch to finish its backward computation on a parameter before launching its gradient synchronization, finer micro-batches lead to a smaller overlap between", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "computation and communication. Hence, instead of using a static value, PipeTransformer searches for optimal
on the fly in the hybrid of DDP environment by enumerating
values ranging from
to
. For a specific training environment, the", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "profiling needs only to be done once (see Algorithm 1 line 35).", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "For the complete source code, please refer to `https://github.com/Distributed-AI/PipeTransformer/blob/master/pipe_transformer/pipe/auto_pipe.py`.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "AutoDP: Spawning More Pipeline Replicas\nAs AutoPipe compresses the same pipeline into fewer GPUs, AutoDP can automatically spawn new pipeline replicas to increase data-parallel width.\n\nDespite the conceptual simplicity, subtle dependencies on communications and states require careful design. The challenges are threefold:\n\n1. DDP Communication: Collective communications in PyTorch DDP requires static membership, which prevents new pipelines from connecting with existing ones;", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "2. State Synchronization: newly activated processes must be consistent with existing pipelines in the training progress (e.g., epoch number and learning rate), weights and optimizer states, the boundary of frozen layers, and pipeline GPU range;\n\n3. Dataset Redistribution: the dataset should be re-balanced to match a dynamic number of pipelines. This not only avoids stragglers but also ensures that gradients from all DDP processes are equally weighted.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\nFigure 8. AutoDP: handling dynamical data-parallel with messaging between double process groups (Process 0-7 belong to machine 0, while process 8-15 belong to machine 1)\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "To tackle these challenges, we create double communication process groups for DDP. As in the example shown in Figure 8, the message process group (purple) is responsible for light-weight control messages and covers all processes, while the active training process group (yellow) only contains active processes and serves as a vehicle for heavy-weight tensor communications during training. The message group remains static, whereas the training group is dismantled and reconstructed to match active processes.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "In T0, only processes 0 and 8 are active. During the transition to T1, process 0 activates processes 1 and 9 (newly added pipeline replicas) and synchronizes necessary information mentioned above using the message group. The four active processes then form a new training group, allowing static collective communications adaptive to dynamic memberships.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "To redistribute the dataset, we implement a variant of DistributedSampler that can seamlessly adjust data samples to match the number of active pipeline replicas.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "The above design also naturally helps to reduce DDP communication overhead. More specifically, when transitioning from T0 to T1, processes 0 and 1 destroy the existing DDP instances, and active processes construct a new DDP training group using a cached pipelined model (AutoPipe stores frozen model and cached model separately).\n\nWe use the following APIs to implement the design above.\n\n```python\nimport torch.distributed as dist\nfrom torch.nn.parallel import DistributedDataParallel as DDP", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "# initialize the process group (this must be called in the initialization of PyTorch DDP)\ndist.init_process_group(init_method='tcp://' + str(self.config.master_addr) + ':' +\nstr(self.config.master_port), backend=Backend.GLOO, rank=self.global_rank, world_size=self.world_size)\n...\n\n# create active process group (yellow color)\nself.active_process_group = dist.new_group(ranks=self.active_ranks, backend=Backend.NCCL, timeout=timedelta(days=365))\n...", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "# create message process group (yellow color)\nself.comm_broadcast_group = dist.new_group(ranks=[i for i in range(self.world_size)], backend=Backend.GLOO, timeout=timedelta(days=365))\n...", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "# create DDP-enabled model when the number of data-parallel workers is changed. Note:\n# 1. The process group to be used for distributed data all-reduction.\nIf None, the default process group, which is created by torch.distributed.init_process_group, will be used.\nIn our case, we set it as self.active_process_group\n# 2. device_ids should be set when the pipeline length = 1 (the model resides on a single CUDA device).", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "self.pipe_len = gpu_num_per_process\nif gpu_num_per_process > 1:\n model = DDP(model, process_group=self.active_process_group, find_unused_parameters=True)\nelse:\n model = DDP(model, device_ids=[self.local_rank], process_group=self.active_process_group, find_unused_parameters=True)", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "# to broadcast message among processes, we use dist.broadcast_object_list\ndef dist_broadcast(object_list, src, group):\n \"\"\"Broadcasts a given object to all parties.\"\"\"\n dist.broadcast_object_list(object_list, src, group=group)\n return object_list\n```\nFor the complete source code, please refer to `https://github.com/Distributed-AI/PipeTransformer/blob/master/pipe_transformer/dp/auto_dp.py`.\n\n# Experiments", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "This section first summarizes experiment setups and then evaluates PipeTransformer using computer vision and natural language processing tasks.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Hardware. Experiments were conducted on 2 identical machines connected by InfiniBand CX353A (
GB/s), where each machine is equipped with 8 NVIDIA Quadro RTX 5000 (16GB GPU memory). GPU-to-GPU bandwidth within a machine (PCI 3.0, 16 lanes) is
GB/s.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Implementation. We used PyTorch Pipe as a building block. The BERT model definition, configuration, and related tokenizer are from HuggingFace 3.5.0. We implemented Vision Transformer using PyTorch by following its TensorFlow implementation. More details can be found in our source code.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Models and Datasets. Experiments employ two representative Transformers in CV and NLP: Vision Transformer (ViT) and BERT. ViT was run on an image classification task, initialized with pre-trained weights on ImageNet21K and fine-tuned on ImageNet and CIFAR-100. BERT was run on two tasks, text classification on the SST-2 dataset from the General Language Understanding Evaluation (GLUE) benchmark, and question answering on the SQuAD v1.1 Dataset (Stanford Question Answering), which is a", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "collection of 100k crowdsourced question/answer pairs.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Training Schemes. Given that large models normally would require thousands of GPU-days {\\emph{e.g.}, GPT-3) if trained from scratch, fine-tuning downstream tasks using pre-trained models has become a trend in CV and NLP communities. Moreover, PipeTransformer is a complex training system that involves multiple core components. Thus, for the first version of PipeTransformer system development and algorithmic research, it is not cost-efficient to develop and evaluate from scratch using", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "large-scale pre-training. Therefore, the experiments presented in this section focuses on pre-trained models. Note that since the model architectures in pre-training and fine-tuning are the same, PipeTransformer can serve both. We discussed pre-training results in the Appendix.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Baseline. Experiments in this section compare PipeTransformer to the state-of-the-art framework, a hybrid scheme of PyTorch Pipeline (PyTorch\u2019s implementation of GPipe) and PyTorch DDP. Since this is the first paper that studies accelerating distributed training by freezing layers, there are no perfectly aligned counterpart solutions yet.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Hyper-parameters. Experiments use ViT-B/16 (12 transformer layers,
input patch size) for ImageNet and CIFAR-100, BERT-large-uncased (24 layers) for SQuAD 1.1, and BERT-base-uncased (12 layers) for SST-2. With PipeTransformer, ViT and BERT training can set the per-pipeline batch size to around 400 and 64, respectively. Other hyperparameters (e.g., epoch, learning rate) for all experiments are presented in", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Appendix.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Overall Training Acceleration\n\n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "We summarize the overall experimental results in the table above. Note that the speedup we report is based on a conservative
value that can obtain comparable or even higher accuracy. A more aggressive
(
,
) can obtain a higher speedup but may lead to a slight loss in accuracy. Note that the model size of BERT (24 layers) is larger than ViT-B/16 (12 layers), thus it takes more time for communication.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Performance Analysis", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Speedup Breakdown\n\nThis section presents evaluation results and analyzes the performance of different components in \\autopipe. More experimental results can be found in the Appendix.\n\n\n
\n
\nFigure 9. Speedup Breakdown (ViT on ImageNet)\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "To understand the efficacy of all four components and their impacts on training speed, we experimented with different combinations and used their training sample throughput (samples/second) and speedup ratio as metrics. Results are illustrated in Figure 9. Key takeaways from these experimental results are:", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "1. the main speedup is the result of elastic pipelining which is achieved through the joint use of AutoPipe and AutoDP;\n2. AutoCache's contribution is amplified by AutoDP;\n3. freeze training alone without system-wise adjustment even downgrades the training speed.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Tuning
in Freezing Algorithm\n\n\n
\n
\nFigure 10. Tuning
in Freezing Algorithm\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "We ran experiments to show how the
in the freeze algorithms influences training speed. The result clearly demonstrates that a larger
(excessive freeze) leads to a greater speedup but suffers from a slight performance degradation. In the case shown in Figure 10, where
, freeze training", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "outperforms normal training and obtains a
-fold speedup. We provide more results in the Appendix.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Optimal Chunks in the elastic pipeline\n\n\n
\n
\nFigure 11. Optimal chunk number in the elastic pipeline\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "We profiled the optimal number of micro-batches
for different pipeline lengths
. Results are summarized in Figure 11. As we can see, different
values lead to different optimal
, and the throughput gaps across different M values are large (as", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "shown when
), which confirms the necessity of an anterior profiler in elastic pipelining.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "Understanding the Timing of Caching\n\n\n
\n
\nFigure 12. the timing of caching\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "To evaluate AutoCache, we compared the sample throughput of training that activates AutoCache from epoch
(blue) with the training job without AutoCache (red). Figure 12 shows that enabling caching too early can slow down training, as caching can be more expensive than the forward propagation on a small number of frozen layers. After more layers are frozen, caching activations clearly outperform the corresponding forward propagation. As a", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "result, AutoCache uses a profiler to determine the proper timing to enable caching. In our system, for ViT (12 layers), caching starts from 3 frozen layers, while for BERT (24 layers), caching starts from 5 frozen layers.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "For more detailed experimental analysis, please refer to our paper.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "# Summarization", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "This blog introduces PipeTransformer, a holistic solution that combines elastic pipeline-parallel and data-parallel for distributed training using PyTorch Distributed APIs. More specifically, PipeTransformer incrementally freezes layers in the pipeline, packs remaining active layers into fewer GPUs, and forks more pipeline replicas to increase the data-parallel width. Evaluations on ViT and BERT models show that compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83\u00d7 speedups without", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "accuracy loss.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "# Reference\n\n[1] Li, S., Zhao, Y., Varma, R., Salpekar, O., Noordhuis, P., Li,T., Paszke, A., Smith, J., Vaughan, B., Damania, P., et al. Pytorch Distributed: Experiences on Accelerating Dataparallel Training. Proceedings of the VLDB Endowment,13(12), 2020\n\n[2] Devlin, J., Chang, M. W., Lee, K., and Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT, 2019", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "[3] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is Worth 16x16 words: Transformers for Image Recognition at Scale.\n\n[4] Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language Models are Few-shot Learners.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "[5] Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. Gshard: Scaling Giant Models with Conditional Computation and Automatic Sharding.\n\n[6] Li, M., Andersen, D. G., Park, J. W., Smola, A. J., Ahmed, A., Josifovski, V., Long, J., Shekita, E. J., and Su, B. Y. Scaling Distributed Machine Learning with the Parameter Server. In 11th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 14), pp. 583\u2013598, 2014.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "[7] Jiang, Y., Zhu, Y., Lan, C., Yi, B., Cui, Y., and Guo, C. A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU/CPU Clusters. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pp. 463\u2013479. USENIX Association, November 2020. ISBN 978-1-939133-19- 9.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "[8] Kim, S., Yu, G. I., Park, H., Cho, S., Jeong, E., Ha, H., Lee, S., Jeong, J. S., and Chun, B. G. Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks. In Proceedings of the Fourteenth EuroSys Conference 2019, pp. 1\u201315, 2019.\n\n[9] Kim, C., Lee, H., Jeong, M., Baek, W., Yoon, B., Kim, I., Lim, S., and Kim, S. TorchGPipe: On-the-fly Pipeline Parallelism for Training Giant Models.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "[10] Huang, Y., Cheng, Y., Bapna, A., Firat, O., Chen, M. X., Chen, D., Lee, H., Ngiam, J., Le, Q. V., Wu, Y., et al. Gpipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "[11] Park, J. H., Yun, G., Yi, C. M., Nguyen, N. T., Lee, S., Choi, J., Noh, S. H., and ri Choi, Y. Hetpipe: Enabling Large DNN Training on (whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism. In 2020 USENIX Annual Technical Conference (USENIX ATC 20), pp. 307\u2013321. USENIX Association, July 2020. ISBN 978-1-939133- 14-4.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "[12] Narayanan, D., Harlap, A., Phanishayee, A., Seshadri, V., Devanur, N. R., Ganger, G. R., Gibbons, P. B., and Zaharia, M. Pipedream: Generalized Pipeline Parallelism for DNN Training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, SOSP \u201919, pp. 1\u201315, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450368735. doi: 10.1145/3341301.3359646.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "[13] Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. Gshard: Scaling Giant Models with Conditional Computation and Automatic Sharding.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "[14] Shazeer, N., Cheng, Y., Parmar, N., Tran, D., Vaswani, A., Koanantakool, P., Hawkins, P., Lee, H., Hong, M., Young, C., Sepassi, R., and Hechtman, B. Mesh-Tensorflow: Deep Learning for Supercomputers. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31, pp. 10414\u201310423. Curran Associates, Inc., 2018.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "[15] Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-LM: Training Multi-billion Parameter Language Models using Model Parallelism.\n\n[16] Rajbhandari, S., Rasley, J., Ruwase, O., and He, Y. ZERO: Memory Optimization towards Training a Trillion Parameter Models.\n\n[17] Raghu, M., Gilmer, J., Yosinski, J., and Sohl Dickstein, J. Svcca: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. In NIPS, 2017.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "As shown in the animation (Figure 4), PipeTransformer is designed with four core building blocks to address the aforementioned challenges. First, we design a tunable and adaptive algorithm to generate signals that guide the selection of layers to freeze over different iterations (Freeze Algorithm). Once triggered by these signals, our elastic pipelining module (AutoPipe), then packs the remaining active layers into fewer GPUs by taking both activation sizes and variances of workloads across heterogeneous partitions (frozen layers and active layers) into account. It then splits a mini-batch into an optimal number of micro-batches based on prior profiling results for different pipeline lengths. Our next module, AutoDP, spawns additional pipeline replicas to occupy freed-up GPUs and maintains hierarchical communication process groups to attain dynamic membership for collective communications. Our final module, AutoCache, efficiently shares activations across existing and new data-parallel processes and automatically replaces stale caches during transitions.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Overall, PipeTransformer combines the Freeze Algorithm, AutoPipe, AutoDP, and AutoCache modules to provide a significant training speedup.\nWe evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on GLUE and SQuAD datasets. Our results show that PipeTransformer attains up to 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design.\nFinally, we have also developed open-source flexible APIs for PipeTransformer, which offer a clean separation among the freeze algorithm, model definitions, and training accelerations, allowing for transferability to other algorithms that require similar freezing strategies.\n\n# Overall Design", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "# Overall Design\n\nSuppose we aim to train a massive model in a distributed training system where the hybrid of pipelined model parallelism and data parallelism is used to target scenarios where either the memory of a single GPU device cannot hold the model, or if loaded, the batch size is small enough to avoid running out of memory. More specifically, we define our settings as follows:\n\nTraining task and model definition. We train Transformer models (e.g., Vision Transformer, BERT on large-scale image or text datasets. The Transformer model
has
layers, in which the
th layer is composed of a forward computation function
and a corresponding set of parameters.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Training infrastructure. Assume the training infrastructure contains a GPU cluster that has
GPU servers (i.e. nodes). Each node has
GPUs. Our cluster is homogeneous, meaning that each GPU and server have the same hardware configuration. Each GPU's memory capacity is
. Servers are connected by a high bandwidth network interface such as InfiniBand interconnect.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Pipeline parallelism. In each machine, we load a model
into a pipeline
which has
partitions (
also represents the pipeline length). The
th partition
consists of consecutive layers. We assume each partition is handled by a single GPU device.
, meaning that we can build multiple pipelines for multiple model replicas in a single machine. We assume all GPU devices in a pipeline belonging to the same machine. Our pipeline is a synchronous pipeline, which does not involve stale gradients, and the number of micro-batches is
. In the Linux OS, each pipeline is handled by a single process. We refer the reader to GPipe [10] for more details.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Data parallelism. DDP is a cross-machine distributed data-parallel process group within
parallel workers. Each worker is a pipeline replica (a single process). The
th worker's index (ID) is rank
. For any two pipelines in DDP, they can belong to either the same GPU server or different GPU servers, and they can exchange gradients with the AllReduce algorithm.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Under these settings, our goal is to accelerate training by leveraging freeze training, which does not require all layers to be trained throughout the duration of the training. Additionally, it may help save computation, communication, memory cost, and potentially prevent overfitting by consecutively freezing layers. However, these benefits can only be achieved by overcoming the four challenges of designing an adaptive freezing algorithm, dynamical pipeline re-partitioning, efficient resource reallocation, and cross-process caching, as discussed in the introduction.\n\n\n\n
\n
\nFigure 5. Overview of PipeTransformer Training System\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "PipeTransformer co-designs an on-the-fly freeze algorithm and an automated elastic pipelining training system that can dynamically transform the scope of the pipelined model and the number of pipeline replicas. The overall system architecture is illustrated in Figure 5. To support PipeTransformer\u2019s elastic pipelining, we maintain a customized version of PyTorch Pipeline. For data parallelism, we use PyTorch DDP as a baseline. Other libraries are standard mechanisms of an operating system (e.g.,multi-processing) and thus avoid specialized software or hardware customization requirements. To ensure the generality of our framework, we have decoupled the training system into four core components: freeze algorithm, AutoPipe, AutoDP, and AutoCache. The freeze algorithm (grey) samples indicators from the training loop and makes layer-wise freezing decisions, which will be shared with AutoPipe (green). AutoPipe is an elastic pipeline module that speeds up training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs (pink), leading to both fewer cross-GPU communications and smaller pipeline bubbles. Subsequently, AutoPipe passes pipeline length information to AutoDP (purple), which then spawns more pipeline replicas to increase data-parallel width, if possible. The illustration also includes an example in which AutoDP introduces a new replica (purple). AutoCache (orange edges) is a cross-pipeline caching module, as illustrated by connections between pipelines. The source code architecture is aligned with Figure 5 for readability and generality.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "# Implementation Using PyTorch APIs\n\nAs can be seen from Figure 5, PipeTransformers contain four components: Freeze Algorithm, AutoPipe, AutoDP, and AutoCache. Among them, AutoPipe and AutoDP relies on PyTorch DDP (`torch.nn.parallel.DistributedDataParallel`) [1] and Pipeline (`torch.distributed.pipeline`), respectively. In this blog, we only highlight the key implementation details of AutoPipe and AutoDP. For details of Freeze Algorithm and AutoCache, please refer to our paper.\n\n## AutoPipe: Elastic Pipelining\n\nAutoPipe can accelerate training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs. This section elaborates on the key components of AutoPipe that dynamically 1) partition pipelines, 2) minimize the number of pipeline devices, and 3) optimize mini-batch chunk size accordingly.\n\n### Basic Usage of PyTorch Pipeline", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Before diving into details of AutoPipe, let us warm up the basic usage of PyTorch Pipeline (`torch.distributed.pipeline.sync.Pipe`, see [this tutorial](https://pytorch.org/docs/stable/pipeline.html)). More specially, we present a simple example to understand the design of Pipeline in practice:\n\n```python\n# Step 1: build a model including two linear layers\nfc1 = nn.Linear(16, 8).cuda(0)\nfc2 = nn.Linear(8, 4).cuda(1)\n\n# Step 2: wrap the two layers with nn.Sequential\nmodel = nn.Sequential(fc1, fc2)\n\n# Step 3: build Pipe (torch.distributed.pipeline.sync.Pipe)\nmodel = Pipe(model, chunks=8)\n\n# do training/inference\ninput = torch.rand(16, 16).cuda(0)\noutput_rref = model(input)\n```", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "In this basic example, we can see that before initializing `Pipe`, we need to partition the model `nn.Sequential` into multiple GPU devices and set optimal chunk number (`chunks`). Balancing computation time across partitions is critical to pipeline training speed, as skewed workload distributions across stages can lead to stragglers and forcing devices with lighter workloads to wait. The chunk number may also have a non-trivial influence on the throughput of the pipeline.\n\n\n### Balanced Pipeline Partitioning\n\nIn dynamic training system such as PipeTransformer, maintaining optimally balanced partitions in terms of parameter numbers does not guarantee the fastest training speed because other factors also play a crucial role:\n\n\n
\n
\nFigure 6. The partition boundary is in the middle of a skip connection\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "1. Cross-partition communication overhead. Placing a partition boundary in the middle of a skip connection leads to additional communications since tensors in the skip connection must now be copied to a different GPU. For example, with BERT partitions in Figure 6, partition
must take intermediate outputs from both partition
and partition
. In contrast, if the boundary is placed after the addition layer, the communication overhead between partition
and
is visibly smaller. Our measurements show that having cross-device communication is more expensive than having slightly imbalanced partitions (see the Appendix in our paper). Therefore, we do not consider breaking skip connections (highlighted separately as an entire attention layer and MLP layer in green color at line 7 in Algorithm 1.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "2. Frozen layer memory footprint. During training, AutoPipe must recompute partition boundaries several times to balance two distinct types of layers: frozen layers and active layers. The frozen layer's memory cost is a fraction of that inactive layer, given that the frozen layer does not need backward activation maps, optimizer states, and gradients. Instead of launching intrusive profilers to obtain thorough metrics on memory and computational cost, we define a tunable cost factor
to estimate the memory footprint ratio of a frozen layer over the same active layer. Based on empirical measurements in our experimental hardware, we set it to
.\n\n\n\n\n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Based on the above two considerations, AutoPipe balances pipeline partitions based on parameter sizes. More specifically, AutoPipe uses a greedy algorithm to allocate all frozen and active layers to evenly distribute partitioned sublayers into
GPU devices. Pseudocode is described as the `load\\_balance()` function in Algorithm 1. The frozen layers are extracted from the original model and kept in a separate model instance
in the first device of a pipeline.\n\nNote that the partition algorithm employed in this paper is not the only option; PipeTransformer is modularized to work with any alternatives.\n\n\n### Pipeline Compression", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
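+{"page_content": "To make the greedy, parameter-size-based balancing concrete, below is a minimal sketch of such a partitioner. It is only illustrative: the function name, the frozen-layer cost discount, and the purely contiguous assignment are our assumptions, and it omits details that the real `load_balance()` handles, such as keeping skip connections intact and pinning the frozen layers to the first device.\n\n```python\nimport torch.nn as nn\n\ndef partition_by_parameter_size(layers, frozen_flags, num_partitions, frozen_cost_factor=0.25):\n    \"\"\"Greedy, order-preserving partitioning sketch: walk the layers in order and\n    close a partition once its running cost reaches an even share of the total.\n    Frozen layers are discounted by frozen_cost_factor because they carry no\n    gradients or optimizer states (the factor used here is illustrative).\"\"\"\n    costs = []\n    for layer, frozen in zip(layers, frozen_flags):\n        num_params = sum(p.numel() for p in layer.parameters())\n        costs.append(num_params * (frozen_cost_factor if frozen else 1.0))\n\n    target = sum(costs) / num_partitions\n    partitions, current, current_cost = [], [], 0.0\n    for layer, cost in zip(layers, costs):\n        current.append(layer)\n        current_cost += cost\n        # close this partition once it has reached its share, keeping at least\n        # one partition open for the remaining layers\n        if current_cost >= target and len(partitions) < num_partitions - 1:\n            partitions.append(nn.Sequential(*current))\n            current, current_cost = [], 0.0\n    partitions.append(nn.Sequential(*current))\n    return partitions\n```\n\nEach returned `nn.Sequential` can then be moved to its own GPU and wrapped with `Pipe`, as in the basic example earlier.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}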
+{"page_content": "Pipeline compression helps to free up GPUs to accommodate more pipeline replicas and reduce the number of cross-device communications between partitions. To determine the timing of compression, we can estimate the memory cost of the largest partition after compression, and then compare it with that of the largest partition of a pipeline at timestep
. To avoid extensive memory profiling, the compression algorithm uses the parameter size as a proxy for the training memory footprint. Based on this simplification, the criterion of pipeline compression is as follows:\n\n\n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Once the freeze notification is received, AutoPipe will always attempt to divide the pipeline length
by 2 (e.g., from 8 to 4, then 2). By using
as the input, the compression algorithm can verify if the result satisfies the criterion in Equation (1). Pseudocode is shown in lines 25-33 in Algorithm 1. Note that this compression makes the acceleration ratio exponentially increase during training, meaning that if a GPU server has a larger number of GPUs (e.g., more than 8), the acceleration ratio will be further amplified.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
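+{"page_content": "As a rough illustration of this halving check (not the actual lines 25-33 of Algorithm 1), the sketch below reuses the `partition_by_parameter_size()` helper from the earlier snippet and uses the parameter count of the largest partition as the memory proxy; the function names and the baseline argument are our assumptions.\n\n```python\ndef largest_partition_param_size(partitions):\n    \"\"\"Parameter count of the largest partition, used as a proxy for its memory footprint.\"\"\"\n    return max(sum(p.numel() for p in part.parameters()) for part in partitions)\n\ndef try_compress_pipeline(layers, frozen_flags, pipe_len, baseline_size):\n    \"\"\"On a freeze notification, attempt to halve the pipeline length and accept the\n    shorter pipeline only if its largest partition stays within the baseline\n    largest-partition size, i.e., the criterion sketched in Equation (1).\"\"\"\n    if pipe_len == 1:\n        return pipe_len\n    candidate_len = pipe_len // 2\n    candidate = partition_by_parameter_size(layers, frozen_flags, candidate_len)\n    if largest_partition_param_size(candidate) <= baseline_size:\n        return candidate_len  # criterion satisfied: compress the pipeline\n    return pipe_len           # otherwise keep the current pipeline length\n```", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}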
+{"page_content": "\n
\n
\nFigure 7. Pipeline Bubble:
, and
denote forward, backward, and the optimizer update of micro-batch
on device
, respectively. The total bubble size in each iteration is
times per micro-batch forward and backward cost.\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Additionally, such a technique can also speed up training by shrinking the size of pipeline bubbles. To explain bubble sizes in a pipeline, Figure 7 depicts how 4 micro-batches run through a 4-device pipeline
. In general, the total bubble size is
times per micro-batch forward and backward cost. Therefore, it is clear that shorter pipelines have smaller bubble sizes.\n\n### Dynamic Number of Micro-Batches", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Prior pipeline parallel systems use a fixed number of micro-batches per mini-batch (
). GPipe suggests
, where
is the number of partitions (pipeline length). However, given that PipeTransformer dynamically configures
, we find it to be sub-optimal to maintain a static
during training. Moreover, when integrated with DDP, the value of
also has an impact on the efficiency of DDP gradient synchronizations. Since DDP must wait for the last micro-batch to finish its backward computation on a parameter before launching its gradient synchronization, finer micro-batches lead to a smaller overlap between computation and communication. Hence, instead of using a static value, PipeTransformer searches for optimal
on the fly in the hybrid of DDP environment by enumerating
values ranging from
to
. For a specific training environment, the profiling needs only to be done once (see Algorithm 1 line 35).", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
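+{"page_content": "The snippet below sketches what such an on-the-fly profiling step could look like. It is not the code behind line 35 of Algorithm 1: `run_iteration` is a hypothetical callable that performs one forward/backward pass with a given number of chunks, and the candidate list, warmup, and iteration counts are arbitrary choices.\n\n```python\nimport time\n\nimport torch\n\ndef profile_optimal_chunks(run_iteration, candidate_chunks, batch_size, warmup=2, iters=5):\n    \"\"\"Time a few training iterations for each candidate number of micro-batches (chunks)\n    and keep the value with the highest sample throughput.\"\"\"\n    best_chunks, best_throughput = None, 0.0\n    for chunks in candidate_chunks:\n        for _ in range(warmup):          # let kernels and the caching allocator warm up\n            run_iteration(chunks)\n        torch.cuda.synchronize()\n        start = time.time()\n        for _ in range(iters):\n            run_iteration(chunks)\n        torch.cuda.synchronize()\n        throughput = iters * batch_size / (time.time() - start)\n        if throughput > best_throughput:\n            best_chunks, best_throughput = chunks, throughput\n    return best_chunks\n```", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}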
+{"page_content": "For the complete source code, please refer to `https://github.com/Distributed-AI/PipeTransformer/blob/master/pipe_transformer/pipe/auto_pipe.py`.\n\n## AutoDP: Spawning More Pipeline Replicas\nAs AutoPipe compresses the same pipeline into fewer GPUs, AutoDP can automatically spawn new pipeline replicas to increase data-parallel width.\n\nDespite the conceptual simplicity, subtle dependencies on communications and states require careful design. The challenges are threefold:\n\n1. DDP Communication: Collective communications in PyTorch DDP requires static membership, which prevents new pipelines from connecting with existing ones;\n\n2. State Synchronization: newly activated processes must be consistent with existing pipelines in the training progress (e.g., epoch number and learning rate), weights and optimizer states, the boundary of frozen layers, and pipeline GPU range;", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "3. Dataset Redistribution: the dataset should be re-balanced to match a dynamic number of pipelines. This not only avoids stragglers but also ensures that gradients from all DDP processes are equally weighted.\n\n\n
\n
\nFigure 8. AutoDP: handling dynamical data-parallel with messaging between double process groups (Process 0-7 belong to machine 0, while process 8-15 belong to machine 1)\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "To tackle these challenges, we create double communication process groups for DDP. As in the example shown in Figure 8, the message process group (purple) is responsible for light-weight control messages and covers all processes, while the active training process group (yellow) only contains active processes and serves as a vehicle for heavy-weight tensor communications during training. The message group remains static, whereas the training group is dismantled and reconstructed to match active processes.\nIn T0, only processes 0 and 8 are active. During the transition to T1, process 0 activates processes 1 and 9 (newly added pipeline replicas) and synchronizes necessary information mentioned above using the message group. The four active processes then form a new training group, allowing static collective communications adaptive to dynamic memberships.\nTo redistribute the dataset, we implement a variant of DistributedSampler that can seamlessly adjust data samples to match the number of active pipeline replicas.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "The above design also naturally helps to reduce DDP communication overhead. More specifically, when transitioning from T0 to T1, processes 0 and 1 destroy the existing DDP instances, and active processes construct a new DDP training group using a cached pipelined model (AutoPipe stores frozen model and cached model separately).\n\nWe use the following APIs to implement the design above.\n\n```python\nimport torch.distributed as dist\nfrom torch.nn.parallel import DistributedDataParallel as DDP\n\n# initialize the process group (this must be called in the initialization of PyTorch DDP)\ndist.init_process_group(init_method='tcp://' + str(self.config.master_addr) + ':' +\nstr(self.config.master_port), backend=Backend.GLOO, rank=self.global_rank, world_size=self.world_size)\n...\n\n# create active process group (yellow color)\nself.active_process_group = dist.new_group(ranks=self.active_ranks, backend=Backend.NCCL, timeout=timedelta(days=365))\n...", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "# create message process group (yellow color)\nself.comm_broadcast_group = dist.new_group(ranks=[i for i in range(self.world_size)], backend=Backend.GLOO, timeout=timedelta(days=365))\n...\n\n# create DDP-enabled model when the number of data-parallel workers is changed. Note:\n# 1. The process group to be used for distributed data all-reduction.\nIf None, the default process group, which is created by torch.distributed.init_process_group, will be used.\nIn our case, we set it as self.active_process_group\n# 2. device_ids should be set when the pipeline length = 1 (the model resides on a single CUDA device).\n\nself.pipe_len = gpu_num_per_process\nif gpu_num_per_process > 1:\n model = DDP(model, process_group=self.active_process_group, find_unused_parameters=True)\nelse:\n model = DDP(model, device_ids=[self.local_rank], process_group=self.active_process_group, find_unused_parameters=True)", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "# to broadcast message among processes, we use dist.broadcast_object_list\ndef dist_broadcast(object_list, src, group):\n \"\"\"Broadcasts a given object to all parties.\"\"\"\n dist.broadcast_object_list(object_list, src, group=group)\n return object_list\n```\nFor the complete source code, please refer to `https://github.com/Distributed-AI/PipeTransformer/blob/master/pipe_transformer/dp/auto_dp.py`.\n\n# Experiments\n\nThis section first summarizes experiment setups and then evaluates PipeTransformer using computer vision and natural language processing tasks.\n\nHardware. Experiments were conducted on 2 identical machines connected by InfiniBand CX353A (
GB/s), where each machine is equipped with 8 NVIDIA Quadro RTX 5000 (16GB GPU memory). GPU-to-GPU bandwidth within a machine (PCI 3.0, 16 lanes) is
GB/s.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Implementation. We used PyTorch Pipe as a building block. The BERT model definition, configuration, and related tokenizer are from HuggingFace 3.5.0. We implemented Vision Transformer using PyTorch by following its TensorFlow implementation. More details can be found in our source code.\n\nModels and Datasets. Experiments employ two representative Transformers in CV and NLP: Vision Transformer (ViT) and BERT. ViT was run on an image classification task, initialized with pre-trained weights on ImageNet21K and fine-tuned on ImageNet and CIFAR-100. BERT was run on two tasks, text classification on the SST-2 dataset from the General Language Understanding Evaluation (GLUE) benchmark, and question answering on the SQuAD v1.1 Dataset (Stanford Question Answering), which is a collection of 100k crowdsourced question/answer pairs.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Training Schemes. Given that large models normally would require thousands of GPU-days {\\emph{e.g.}, GPT-3) if trained from scratch, fine-tuning downstream tasks using pre-trained models has become a trend in CV and NLP communities. Moreover, PipeTransformer is a complex training system that involves multiple core components. Thus, for the first version of PipeTransformer system development and algorithmic research, it is not cost-efficient to develop and evaluate from scratch using large-scale pre-training. Therefore, the experiments presented in this section focuses on pre-trained models. Note that since the model architectures in pre-training and fine-tuning are the same, PipeTransformer can serve both. We discussed pre-training results in the Appendix.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "Baseline. Experiments in this section compare PipeTransformer to the state-of-the-art framework, a hybrid scheme of PyTorch Pipeline (PyTorch\u2019s implementation of GPipe) and PyTorch DDP. Since this is the first paper that studies accelerating distributed training by freezing layers, there are no perfectly aligned counterpart solutions yet.\n\nHyper-parameters. Experiments use ViT-B/16 (12 transformer layers,
input patch size) for ImageNet and CIFAR-100, BERT-large-uncased (24 layers) for SQuAD 1.1, and BERT-base-uncased (12 layers) for SST-2. With PipeTransformer, ViT and BERT training can set the per-pipeline batch size to around 400 and 64, respectively. Other hyperparameters (e.g., epoch, learning rate) for all experiments are presented in Appendix.\n\n## Overall Training Acceleration\n\n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "We summarize the overall experimental results in the table above. Note that the speedup we report is based on a conservative
value that can obtain comparable or even higher accuracy. A more aggressive
(
,
) can obtain a higher speedup but may lead to a slight loss in accuracy. Note that the model size of BERT (24 layers) is larger than ViT-B/16 (12 layers), thus it takes more time for communication.\n\n## Performance Analysis\n\n### Speedup Breakdown\n\nThis section presents evaluation results and analyzes the performance of different components in \\autopipe. More experimental results can be found in the Appendix.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\nFigure 9. Speedup Breakdown (ViT on ImageNet)\n
\n\nTo understand the efficacy of all four components and their impacts on training speed, we experimented with different combinations and used their training sample throughput (samples/second) and speedup ratio as metrics. Results are illustrated in Figure 9. Key takeaways from these experimental results are:\n\n1. the main speedup is the result of elastic pipelining which is achieved through the joint use of AutoPipe and AutoDP;\n2. AutoCache's contribution is amplified by AutoDP;\n3. freeze training alone without system-wise adjustment even downgrades the training speed.\n\n### Tuning
in Freezing Algorithm", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\nFigure 10. Tuning
in Freezing Algorithm\n
\n\nWe ran experiments to show how the
in the freeze algorithms influences training speed. The result clearly demonstrates that a larger
(excessive freeze) leads to a greater speedup but suffers from a slight performance degradation. In the case shown in Figure 10, where
, freeze training outperforms normal training and obtains a
-fold speedup. We provide more results in the Appendix.\n\n### Optimal Chunks in the elastic pipeline", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\nFigure 11. Optimal chunk number in the elastic pipeline\n
\n\nWe profiled the optimal number of micro-batches
for different pipeline lengths
. Results are summarized in Figure 11. As we can see, different
values lead to different optimal
, and the throughput gaps across different M values are large (as shown when
), which confirms the necessity of an anterior profiler in elastic pipelining.\n\n### Understanding the Timing of Caching\n\n\n
\n
\nFigure 12. the timing of caching\n
", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "To evaluate AutoCache, we compared the sample throughput of training that activates AutoCache from epoch
(blue) with the training job without AutoCache (red). Figure 12 shows that enabling caching too early can slow down training, as caching can be more expensive than the forward propagation on a small number of frozen layers. After more layers are frozen, caching activations clearly outperform the corresponding forward propagation. As a result, AutoCache uses a profiler to determine the proper timing to enable caching. In our system, for ViT (12 layers), caching starts from 3 frozen layers, while for BERT (24 layers), caching starts from 5 frozen layers.\n\nFor more detailed experimental analysis, please refer to our paper.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
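+{"page_content": "To make the idea of caching frozen-layer activations concrete, here is a minimal sketch (not AutoCache\u2019s implementation): it keys the cache by batch index, which assumes the mapping from batch index to samples is fixed across epochs, and it parks cached tensors in host memory; the class and attribute names are ours.\n\n```python\nimport torch\n\nclass FrozenActivationCache:\n    \"\"\"Cache the output of the frozen front of the model so its forward pass can be\n    skipped on later epochs, once caching has been enabled.\"\"\"\n\n    def __init__(self, frozen_layers, enabled=False):\n        self.frozen_layers = frozen_layers\n        self.enabled = enabled   # turned on only after enough layers are frozen\n        self._cache = {}\n\n    @torch.no_grad()\n    def forward_frozen(self, batch_idx, x):\n        if self.enabled and batch_idx in self._cache:\n            # cache hit: skip the frozen forward pass entirely\n            return self._cache[batch_idx].to(x.device, non_blocking=True)\n        out = self.frozen_layers(x)\n        if self.enabled:\n            self._cache[batch_idx] = out.detach().cpu()  # keep GPU memory free\n        return out\n```", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}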
+{"page_content": "# Summarization\nThis blog introduces PipeTransformer, a holistic solution that combines elastic pipeline-parallel and data-parallel for distributed training using PyTorch Distributed APIs. More specifically, PipeTransformer incrementally freezes layers in the pipeline, packs remaining active layers into fewer GPUs, and forks more pipeline replicas to increase the data-parallel width. Evaluations on ViT and BERT models show that compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83\u00d7 speedups without accuracy loss.\n\n\n# Reference\n\n[1] Li, S., Zhao, Y., Varma, R., Salpekar, O., Noordhuis, P., Li,T., Paszke, A., Smith, J., Vaughan, B., Damania, P., et al. Pytorch Distributed: Experiences on Accelerating Dataparallel Training. Proceedings of the VLDB Endowment,13(12), 2020\n\n[2] Devlin, J., Chang, M. W., Lee, K., and Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT, 2019", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "[3] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is Worth 16x16 words: Transformers for Image Recognition at Scale.\n\n[4] Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language Models are Few-shot Learners.\n\n[5] Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. Gshard: Scaling Giant Models with Conditional Computation and Automatic Sharding.\n\n[6] Li, M., Andersen, D. G., Park, J. W., Smola, A. J., Ahmed, A., Josifovski, V., Long, J., Shekita, E. J., and Su, B. Y. Scaling Distributed Machine Learning with the Parameter Server. In 11th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 14), pp. 583\u2013598, 2014.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "[7] Jiang, Y., Zhu, Y., Lan, C., Yi, B., Cui, Y., and Guo, C. A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU/CPU Clusters. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pp. 463\u2013479. USENIX Association, November 2020. ISBN 978-1-939133-19- 9.\n\n[8] Kim, S., Yu, G. I., Park, H., Cho, S., Jeong, E., Ha, H., Lee, S., Jeong, J. S., and Chun, B. G. Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks. In Proceedings of the Fourteenth EuroSys Conference 2019, pp. 1\u201315, 2019.\n\n[9] Kim, C., Lee, H., Jeong, M., Baek, W., Yoon, B., Kim, I., Lim, S., and Kim, S. TorchGPipe: On-the-fly Pipeline Parallelism for Training Giant Models.\n\n[10] Huang, Y., Cheng, Y., Bapna, A., Firat, O., Chen, M. X., Chen, D., Lee, H., Ngiam, J., Le, Q. V., Wu, Y., et al. Gpipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "[11] Park, J. H., Yun, G., Yi, C. M., Nguyen, N. T., Lee, S., Choi, J., Noh, S. H., and ri Choi, Y. Hetpipe: Enabling Large DNN Training on (whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism. In 2020 USENIX Annual Technical Conference (USENIX ATC 20), pp. 307\u2013321. USENIX Association, July 2020. ISBN 978-1-939133- 14-4.\n\n[12] Narayanan, D., Harlap, A., Phanishayee, A., Seshadri, V., Devanur, N. R., Ganger, G. R., Gibbons, P. B., and Zaharia, M. Pipedream: Generalized Pipeline Parallelism for DNN Training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, SOSP \u201919, pp. 1\u201315, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450368735. doi: 10.1145/3341301.3359646.\n\n[13] Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. Gshard: Scaling Giant Models with Conditional Computation and Automatic Sharding.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
+{"page_content": "[14] Shazeer, N., Cheng, Y., Parmar, N., Tran, D., Vaswani, A., Koanantakool, P., Hawkins, P., Lee, H., Hong, M., Young, C., Sepassi, R., and Hechtman, B. Mesh-Tensorflow: Deep Learning for Supercomputers. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31, pp. 10414\u201310423. Curran Associates, Inc., 2018.\n\n[15] Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-LM: Training Multi-billion Parameter Language Models using Model Parallelism.\n\n[16] Rajbhandari, S., Rasley, J., Ruwase, O., and He, Y. ZERO: Memory Optimization towards Training a Trillion Parameter Models.\n\n[17] Raghu, M., Gilmer, J., Yosinski, J., and Sohl Dickstein, J. Svcca: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. In NIPS, 2017.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
{"page_content": "[18] Morcos, A., Raghu, M., and Bengio, S. Insights on Representational Similarity in Neural Networks with Canonical Correlation. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 5732\u20135741. Curran Associates, Inc., 2018.", "metadata": {"source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Model Serving in PyTorch'\nauthor: Jeff Smith\nredirect_from: /2019/05/08/model-serving-in-pyorch.html\n---", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch has seen a lot of adoption in research, but people can get confused about how well PyTorch models can be taken into production. This blog post is meant to clear up any confusion people might have about the road to production in PyTorch.\nUsually when people talk about taking a model \u201cto production,\u201d they usually mean performing **inference**, sometimes called model evaluation or prediction or serving. At the level of a function call, in PyTorch, inference looks something like this:", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "* In Python\n * `module(input)`\n* In traced modules\n * `module(input)`\n* In C++\n * `at::Tensor output = module->forward(inputs).toTensor();`\n\nSince we at Facebook perform inference operations using PyTorch hundreds of trillions of times per day, we've done a lot to make sure that inference runs as efficiently as possible.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "Serving Strategies\n\nThat zoomed-in view of how you use models in inference isn't usually the whole story, though. In a real world machine learning system, you often need to do more than just run a single inference operation in the REPL or Jupyter notebook. Instead, you usually need to integrate your model into a larger application in some way. Depending on what you need to do, you can usually take one of the following approaches.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "Direct embedding", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "In application settings like mobile, we often just directly call the model as part of a larger program. This isn't just for apps; usually this is how robotics and dedicated devices work as well. At a code-level, the call to the model is exactly the same as what is shown above in the section about inference shown above. A key concern is often that a Python interpreter is not present in such environments, which is why PyTorch allows you to call your models from C++ and ship a model without the need for a", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "Python runtime.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "Model microservices", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "If you're using your model in a server side context and you're managing multiple models, you might choose to treat each individual model (or each individual model version) as a separate service, usually using some sort of packaging mechanism like a Docker container. Then that service is often made network accessible via some sort of service, either using JSON over HTTP or an RPC technology like gRPC. The key characteristic of this approach is that you're defining a service with a single endpoint that just", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "calls your model. Then you do do all of your model management (promotion, rollback, etc.) via whatever system you already use to manage your services (e.g. kubernetes, ECS).", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "Model servers", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "An additional possible solution is to use a model server. This is an application built to manage and serve models. It allows you to upload multiple models and get distinct prediction endpoints for each of them. Typically such systems include a number of other features to help solve more of the whole problem of managing and serving models. This can include things like metrics, visualization, data pre-processing, and more. Even something as simple as having a system for automatically versioning models can", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "make building important features like model rollbacks much easier.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "Evolving Patterns", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "The above is a somewhat arbitrary breakdown of different approaches based on a snapshot in time. Design patterns are still evolving. Recently, model server designs have started to adopt more of the technologies of general service infrastructure such as Docker containers and kubernetes, so many model servers have started to share properties of the model microservice design discussed above. For a deeper dive into the general concepts of model server designs, you can check out my [book on machine learning", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "systems](https://www.manning.com/books/machine-learning-systems).", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "Serving PyTorch Models\n\nSo, if you're a PyTorch user, what should you use if you want to take your models to production?", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "If you're on mobile or working on an embedded system like a robot, direct embedding in your application is often the right choice. \nFor mobile specifically, your use case might be served by the ONNX export functionality.\nNote that ONNX, by its very nature, has limitations and doesn't support all of the functionality provided by the larger PyTorch project.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "You can check out [this tutorial](https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html) on deploying PyTorch models to mobile using ONNX to see if this path might suit your use case. \nThat said, we've heard that there's a lot more that PyTorch users want to do on mobile, so look for more mobile-specific functionality in PyTorch in the future.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "For other embedded systems, like robots, running [inference on a PyTorch model from the C++ API](https://pytorch.org/tutorials/advanced/cpp_export.html) could be the right solution.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "If you can't use the cloud or prefer to manage all services using the same technology, you can follow [this example](https://medium.com/datadriveninvestor/deploy-your-pytorch-model-to-production-f69460192217) to build a simple model microservice using the Flask web framework.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "If you want to manage multiple models within a non-cloud service solution, there are teams developing PyTorch support in model servers like [MLFlow](https://mlflow.org/), [Kubeflow](https://www.kubeflow.org/), and [RedisAI.](https://oss.redislabs.com/redisai/) We're excited to see innovation from multiple teams building OSS model servers, and we'll continue to highlight innovation in the PyTorch ecosystem in the future.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "If you can use the cloud for your application, there are several great choices for working with models in the cloud. For AWS Sagemaker, you can start find a guide to [all of the resources from AWS for working with PyTorch](https://docs.aws.amazon.com/sagemaker/latest/dg/pytorch.html), including docs on how to use the [Sagemaker Python SDK](https://sagemaker.readthedocs.io/en/stable/using_pytorch.html). You can also see [some](https://youtu.be/5h1Ot2dPi2E) [talks](https://youtu.be/qc5ZikKw9_w) we've given on", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "using PyTorch on Sagemaker. Finally, if you happen to be using PyTorch via FastAI, then they've written a [really simple guide](https://course.fast.ai/deployment_amzn_sagemaker.html) to getting up and running on Sagemaker.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "The story is similar across other major clouds. On Google Cloud, you can follow [these instructions](https://cloud.google.com/deep-learning-vm/docs/pytorch_start_instance) to get access to a Deep Learning VM with PyTorch pre-installed. On Microsoft Azure, you have a number of ways to get started from [Azure Machine Learning Service](https://azure.microsoft.com/en-us/services/machine-learning-service/) to [Azure Notebooks](https://notebooks.azure.com/pytorch/projects/tutorials) showing how to use PyTorch.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
-{"page_content": "Your Models\n\nWhichever approach you take to bringing your PyTorch models to production, we want to support you and enable your success. Do you love one of the options above? Are you having difficulty with that one crucial feature you can't find support for? We'd love to discuss more on the [deployment category](https://discuss.pytorch.org/c/deployment) on the PyTorch Discuss forums. We'd love to help, and where you're seeing success, amplify your story.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Model Serving in PyTorch'\nauthor: Jeff Smith\nredirect_from: /2019/05/08/model-serving-in-pyorch.html\n---\n\nPyTorch has seen a lot of adoption in research, but people can get confused about how well PyTorch models can be taken into production. This blog post is meant to clear up any confusion people might have about the road to production in PyTorch.\nUsually when people talk about taking a model \u201cto production,\u201d they usually mean performing **inference**, sometimes called model evaluation or prediction or serving. At the level of a function call, in PyTorch, inference looks something like this:\n\n* In Python\n * `module(input)`\n* In traced modules\n * `module(input)`\n* In C++\n * `at::Tensor output = module->forward(inputs).toTensor();`\n\nSince we at Facebook perform inference operations using PyTorch hundreds of trillions of times per day, we've done a lot to make sure that inference runs as efficiently as possible.\n\n## Serving Strategies", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
+{"page_content": "That zoomed-in view of how you use models in inference isn't usually the whole story, though. In a real world machine learning system, you often need to do more than just run a single inference operation in the REPL or Jupyter notebook. Instead, you usually need to integrate your model into a larger application in some way. Depending on what you need to do, you can usually take one of the following approaches.\n\n### Direct embedding\n\nIn application settings like mobile, we often just directly call the model as part of a larger program. This isn't just for apps; usually this is how robotics and dedicated devices work as well. At a code-level, the call to the model is exactly the same as what is shown above in the section about inference shown above. A key concern is often that a Python interpreter is not present in such environments, which is why PyTorch allows you to call your models from C++ and ship a model without the need for a Python runtime.\n\n### Model microservices", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
+{"page_content": "If you're using your model in a server side context and you're managing multiple models, you might choose to treat each individual model (or each individual model version) as a separate service, usually using some sort of packaging mechanism like a Docker container. Then that service is often made network accessible via some sort of service, either using JSON over HTTP or an RPC technology like gRPC. The key characteristic of this approach is that you're defining a service with a single endpoint that just calls your model. Then you do do all of your model management (promotion, rollback, etc.) via whatever system you already use to manage your services (e.g. kubernetes, ECS).\n\n### Model servers", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
+{"page_content": "### Model servers\n\nAn additional possible solution is to use a model server. This is an application built to manage and serve models. It allows you to upload multiple models and get distinct prediction endpoints for each of them. Typically such systems include a number of other features to help solve more of the whole problem of managing and serving models. This can include things like metrics, visualization, data pre-processing, and more. Even something as simple as having a system for automatically versioning models can make building important features like model rollbacks much easier.\n\n### Evolving Patterns", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
+{"page_content": "The above is a somewhat arbitrary breakdown of different approaches based on a snapshot in time. Design patterns are still evolving. Recently, model server designs have started to adopt more of the technologies of general service infrastructure such as Docker containers and kubernetes, so many model servers have started to share properties of the model microservice design discussed above. For a deeper dive into the general concepts of model server designs, you can check out my [book on machine learning systems](https://www.manning.com/books/machine-learning-systems).\n\n## Serving PyTorch Models\n\nSo, if you're a PyTorch user, what should you use if you want to take your models to production?", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
+{"page_content": "If you're on mobile or working on an embedded system like a robot, direct embedding in your application is often the right choice. \nFor mobile specifically, your use case might be served by the ONNX export functionality.\nNote that ONNX, by its very nature, has limitations and doesn't support all of the functionality provided by the larger PyTorch project.\nYou can check out [this tutorial](https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html) on deploying PyTorch models to mobile using ONNX to see if this path might suit your use case. \nThat said, we've heard that there's a lot more that PyTorch users want to do on mobile, so look for more mobile-specific functionality in PyTorch in the future.\nFor other embedded systems, like robots, running [inference on a PyTorch model from the C++ API](https://pytorch.org/tutorials/advanced/cpp_export.html) could be the right solution.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
+{"page_content": "If you can't use the cloud or prefer to manage all services using the same technology, you can follow [this example](https://medium.com/datadriveninvestor/deploy-your-pytorch-model-to-production-f69460192217) to build a simple model microservice using the Flask web framework.\n\nIf you want to manage multiple models within a non-cloud service solution, there are teams developing PyTorch support in model servers like [MLFlow](https://mlflow.org/), [Kubeflow](https://www.kubeflow.org/), and [RedisAI.](https://oss.redislabs.com/redisai/) We're excited to see innovation from multiple teams building OSS model servers, and we'll continue to highlight innovation in the PyTorch ecosystem in the future.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
+{"page_content": "If you can use the cloud for your application, there are several great choices for working with models in the cloud. For AWS Sagemaker, you can start find a guide to [all of the resources from AWS for working with PyTorch](https://docs.aws.amazon.com/sagemaker/latest/dg/pytorch.html), including docs on how to use the [Sagemaker Python SDK](https://sagemaker.readthedocs.io/en/stable/using_pytorch.html). You can also see [some](https://youtu.be/5h1Ot2dPi2E) [talks](https://youtu.be/qc5ZikKw9_w) we've given on using PyTorch on Sagemaker. Finally, if you happen to be using PyTorch via FastAI, then they've written a [really simple guide](https://course.fast.ai/deployment_amzn_sagemaker.html) to getting up and running on Sagemaker.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
+{"page_content": "The story is similar across other major clouds. On Google Cloud, you can follow [these instructions](https://cloud.google.com/deep-learning-vm/docs/pytorch_start_instance) to get access to a Deep Learning VM with PyTorch pre-installed. On Microsoft Azure, you have a number of ways to get started from [Azure Machine Learning Service](https://azure.microsoft.com/en-us/services/machine-learning-service/) to [Azure Notebooks](https://notebooks.azure.com/pytorch/projects/tutorials) showing how to use PyTorch.\n\n## Your Models\n\nWhichever approach you take to bringing your PyTorch models to production, we want to support you and enable your success. Do you love one of the options above? Are you having difficulty with that one crucial feature you can't find support for? We'd love to discuss more on the [deployment category](https://discuss.pytorch.org/c/deployment) on the PyTorch Discuss forums. We'd love to help, and where you're seeing success, amplify your story.", "metadata": {"source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"}}
{"page_content": "---\nlayout: blog_detail\ntitle: 'Accelerating PyTorch with CUDA Graphs'\nauthor: Vinh Nguyen, Michael Carilli, Sukru Burc Eryilmaz, Vartika Singh, Michelle Lin, Natalia Gimelshein, Alban Desmaison, Edward Yang\nfeatured-img: 'assets/images/cudagraphs-pytorch.png'\n---", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Today, we are pleased to announce a new advanced CUDA feature, CUDA Graphs, has been brought to PyTorch. Modern DL frameworks have complicated software stacks that incur significant overheads associated with the submission of each operation to the GPU. When DL workloads are strong-scaled to many GPUs for performance, the time taken by each GPU operation diminishes to just a few microseconds and, in these cases, the high work submission latencies of frameworks often lead to low utilization of the GPU. As", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "GPUs get faster and workloads are scaled to more devices, the likelihood of workloads suffering from these launch-induced stalls increases. To overcome these performance overheads, NVIDIA engineers worked with PyTorch developers to enable CUDA graph execution natively in PyTorch. This design was instrumental in scaling NVIDIA\u2019s MLPerf workloads (implemented in PyTorch) to over 4000 GPUs in order to achieve [record-breaking performance](https://blogs.nvidia.com/blog/2021/06/30/mlperf-ai-training-partners/).", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "Today, we are pleased to announce a new advanced CUDA feature, CUDA Graphs, has been brought to PyTorch. Modern DL frameworks have complicated software stacks that incur significant overheads associated with the submission of each operation to the GPU. When DL workloads are strong-scaled to many GPUs for performance, the time taken by each GPU operation diminishes to just a few microseconds and, in these cases, the high work submission latencies of frameworks often lead to low utilization of the GPU. As GPUs get faster and workloads are scaled to more devices, the likelihood of workloads suffering from these launch-induced stalls increases. To overcome these performance overheads, NVIDIA engineers worked with PyTorch developers to enable CUDA graph execution natively in PyTorch. This design was instrumental in scaling NVIDIA\u2019s MLPerf workloads (implemented in PyTorch) to over 4000 GPUs in order to achieve [record-breaking performance](https://blogs.nvidia.com/blog/2021/06/30/mlperf-ai-training-partners/).", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "CUDA graphs support in PyTorch is just one more example of a long collaboration between NVIDIA and Facebook engineers. [torch.cuda.amp](https://pytorch.org/docs/stable/amp.html), for example, trains with half precision while maintaining the network accuracy achieved with single precision and automatically utilizing tensor cores wherever possible. AMP delivers up to 3X higher performance than FP32 with just a few lines of code change. Similarly, NVIDIA\u2019s [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "was trained using PyTorch on up to 3072 GPUs. In PyTorch, one of the most performant methods to scale-out GPU training is with [torch.nn.parallel.DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel) coupled with the NVIDIA Collective Communications Library ([NCCL](https://developer.nvidia.com/nccl)) backend.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "CUDA graphs support in PyTorch is just one more example of a long collaboration between NVIDIA and Facebook engineers. [torch.cuda.amp](https://pytorch.org/docs/stable/amp.html), for example, trains with half precision while maintaining the network accuracy achieved with single precision and automatically utilizing tensor cores wherever possible. AMP delivers up to 3X higher performance than FP32 with just a few lines of code change. Similarly, NVIDIA\u2019s [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) was trained using PyTorch on up to 3072 GPUs. In PyTorch, one of the most performant methods to scale-out GPU training is with [torch.nn.parallel.DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel) coupled with the NVIDIA Collective Communications Library ([NCCL](https://developer.nvidia.com/nccl)) backend.\n\n\n# CUDA Graphs", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
{"page_content": "# CUDA Graphs\n\n\n[CUDA Graphs](https://developer.nvidia.com/blog/cuda-10-features-revealed/), which made its debut in CUDA 10, let a series of CUDA kernels to be defined and encapsulated as a single unit, i.e., a graph of operations, rather than a sequence of individually-launched operations. It provides a mechanism to launch multiple GPU operations through a single CPU operation, and hence reduces the launching overheads.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "The benefits of CUDA graphs can be demonstrated with the simple example in Figure 1. On the top, a sequence of short kernels is launched one-by-one by the CPU. The CPU launching overhead creates a significant gap in between the kernels. If we replace this sequence of kernels with a CUDA graph, initially we will need to spend a little extra time on building the graph and launching the whole graph in one go on the first occasion, but subsequent executions will be very fast, as there will be very little gap", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "between the kernels. The difference is more pronounced when the same sequence of operations is repeated many times, for example, overy many training steps. In that case, the initial costs of building and launching the graph will be amortized over the entire number of training iterations. For a more comprehensive introduction on the topic, see our blog", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "[Getting Started with CUDA Graphs](https://developer.nvidia.com/blog/cuda-graphs) and GTC talk [Effortless CUDA Graphs](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s32082/).", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\tFigure 1. Benefits of using CUDA graphs\n
", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "NCCL support for CUDA graphs\n\n\nThe previously mentioned benefits of reducing launch overheads also extend to NCCL kernel launches. NCCL enables GPU-based collective and P2P communications. With [NCCL support for CUDA graphs](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/cudagraph.html), we can eliminate the NCCL kernel launch overhead.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, kernel launch timing can be unpredictable due to various CPU load and operating system factors. Such time skews can be harmful to the performance of NCCL collective operations. With CUDA graphs, kernels are clustered together so that performance is consistent across ranks in a distributed workload. This is especially useful in large clusters where even a single slow node can bring down overall cluster level performance.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "For distributed multi-GPU workloads, NCCL is used for collective communications. If we look at training a neural network that leverages data parallelism, without NCCL support for CUDA graphs, we\u2019ll need a separate launch for each of forward/back propagation and NCCL AllReduce. By contrast, with NCCL support for CUDA graphs, we can reduce launch overhead by lumping together the forward/backward propagation and NCCL AllReduce all in a single graph launch.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n Figure 2. Looking at a typical neural network, all the kernel launches for NCCL AllReduce can be bundled into a graph to reduce overhead launch time.\n
\n\n\n# PyTorch CUDA Graphs", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "From PyTorch v1.10, the CUDA graphs functionality is made available as a set of beta APIs.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "API overview", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch supports the construction of CUDA graphs using [stream capture](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#creating-a-graph-using-stream-capture), which puts a CUDA stream in capture mode. CUDA work issued to a capturing stream doesn\u2019t actually run on the GPU. Instead, the work is recorded in a graph. After capture, the graph can be launched to run the GPU work as many times as needed. Each replay runs the same kernels with the same arguments. For pointer arguments this means", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "the same memory addresses are used. By filling input memory with new data (e.g., from a new batch) before each replay, you can rerun the same work on new data.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Replaying a graph sacrifices the dynamic flexibility of typical eager execution in exchange for greatly reduced CPU overhead. A graph\u2019s arguments and kernels are fixed, so a graph replay skips all layers of argument setup and kernel dispatch, including Python, C++, and CUDA driver overheads. Under the hood, a replay submits the entire graph\u2019s work to the GPU with a single call to", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "[cudaGraphLaunch](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__GRAPH.html#group__CUDART__GRAPH_1g1accfe1da0c605a577c22d9751a09597). Kernels in a replay also execute slightly faster on the GPU, but eliding CPU overhead is the main benefit.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "You should try CUDA graphs if all or part of your network is graph-safe (usually this means static shapes and static control flow, but see the other [constraints](https://pytorch.org/docs/master/notes/cuda.html#constraints)) and you suspect its runtime is at least somewhat CPU-limited.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "API example\n\nPyTorch exposes graphs via a raw [`torch.cuda.CUDAGraph`](https://pytorch.org/docs/master/generated/torch.cuda.graph.html#torch.cuda.graph)class and two convenience wrappers, [`torch.cuda.graph`](https://pytorch.org/docs/master/generated/torch.cuda.graph.html#torch.cuda.graph) and [`torch.cuda.make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables).", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "[`torch.cuda.graph`](https://pytorch.org/docs/master/generated/torch.cuda.graph.html#torch.cuda.graph) is a simple, versatile context manager that captures CUDA work in its context. Before capture, warm up the workload to be captured by running a few eager iterations. Warmup must occur on a side stream. Because the graph reads from and writes to the same memory addresses in every replay, you must maintain long-lived references to tensors that hold input and output data during capture. To run the graph on", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "new input data, copy new data to the capture\u2019s input tensor(s), replay the graph, then read the new output from the capture\u2019s output tensor(s).", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "If the entire network is capture safe, one can capture and replay the whole network as in the following example. \n\n```python\nN, D_in, H, D_out = 640, 4096, 2048, 1024\nmodel = torch.nn.Sequential(torch.nn.Linear(D_in, H),\n torch.nn.Dropout(p=0.2),\n torch.nn.Linear(H, D_out),\n torch.nn.Dropout(p=0.1)).cuda()\nloss_fn = torch.nn.MSELoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "# Placeholders used for capture\nstatic_input = torch.randn(N, D_in, device='cuda')\nstatic_target = torch.randn(N, D_out, device='cuda')", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "# warmup\n# Uses static_input and static_target here for convenience,\n# but in a real setting, because the warmup includes optimizer.step()\n# you must use a few batches of real data.\ns = torch.cuda.Stream()\ns.wait_stream(torch.cuda.current_stream())\nwith torch.cuda.stream(s):\n for i in range(3):\n optimizer.zero_grad(set_to_none=True)\n y_pred = model(static_input)\n loss = loss_fn(y_pred, static_target)\n loss.backward()\n optimizer.step()", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "torch.cuda.current_stream().wait_stream(s)", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "The benefits of CUDA graphs can be demonstrated with the simple example in Figure 1. On the top, a sequence of short kernels is launched one-by-one by the CPU. The CPU launching overhead creates a significant gap in between the kernels. If we replace this sequence of kernels with a CUDA graph, initially we will need to spend a little extra time on building the graph and launching the whole graph in one go on the first occasion, but subsequent executions will be very fast, as there will be very little gap between the kernels. The difference is more pronounced when the same sequence of operations is repeated many times, for example, overy many training steps. In that case, the initial costs of building and launching the graph will be amortized over the entire number of training iterations. For a more comprehensive introduction on the topic, see our blog \n [Getting Started with CUDA Graphs](https://developer.nvidia.com/blog/cuda-graphs) and GTC talk [Effortless CUDA Graphs](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s32082/).", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\tFigure 1. Benefits of using CUDA graphs\n
\n\n\n## NCCL support for CUDA graphs\n\n\nThe previously mentioned benefits of reducing launch overheads also extend to NCCL kernel launches. NCCL enables GPU-based collective and P2P communications. With [NCCL support for CUDA graphs](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/cudagraph.html), we can eliminate the NCCL kernel launch overhead.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "Additionally, kernel launch timing can be unpredictable due to various CPU load and operating system factors. Such time skews can be harmful to the performance of NCCL collective operations. With CUDA graphs, kernels are clustered together so that performance is consistent across ranks in a distributed workload. This is especially useful in large clusters where even a single slow node can bring down overall cluster level performance.\n\nFor distributed multi-GPU workloads, NCCL is used for collective communications. If we look at training a neural network that leverages data parallelism, without NCCL support for CUDA graphs, we\u2019ll need a separate launch for each of forward/back propagation and NCCL AllReduce. By contrast, with NCCL support for CUDA graphs, we can reduce launch overhead by lumping together the forward/backward propagation and NCCL AllReduce all in a single graph launch.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n Figure 2. Looking at a typical neural network, all the kernel launches for NCCL AllReduce can be bundled into a graph to reduce overhead launch time.\n
\n\n\n# PyTorch CUDA Graphs\n\n\nFrom PyTorch v1.10, the CUDA graphs functionality is made available as a set of beta APIs. \n\n### API overview", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "### API overview\n\nPyTorch supports the construction of CUDA graphs using [stream capture](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#creating-a-graph-using-stream-capture), which puts a CUDA stream in capture mode. CUDA work issued to a capturing stream doesn\u2019t actually run on the GPU. Instead, the work is recorded in a graph. After capture, the graph can be launched to run the GPU work as many times as needed. Each replay runs the same kernels with the same arguments. For pointer arguments this means the same memory addresses are used. By filling input memory with new data (e.g., from a new batch) before each replay, you can rerun the same work on new data.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "Replaying a graph sacrifices the dynamic flexibility of typical eager execution in exchange for greatly reduced CPU overhead. A graph\u2019s arguments and kernels are fixed, so a graph replay skips all layers of argument setup and kernel dispatch, including Python, C++, and CUDA driver overheads. Under the hood, a replay submits the entire graph\u2019s work to the GPU with a single call to [cudaGraphLaunch](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__GRAPH.html#group__CUDART__GRAPH_1g1accfe1da0c605a577c22d9751a09597). Kernels in a replay also execute slightly faster on the GPU, but eliding CPU overhead is the main benefit.\n\nYou should try CUDA graphs if all or part of your network is graph-safe (usually this means static shapes and static control flow, but see the other [constraints](https://pytorch.org/docs/master/notes/cuda.html#constraints)) and you suspect its runtime is at least somewhat CPU-limited.\n\n### API example", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "### API example\n\nPyTorch exposes graphs via a raw [`torch.cuda.CUDAGraph`](https://pytorch.org/docs/master/generated/torch.cuda.graph.html#torch.cuda.graph)class and two convenience wrappers, [`torch.cuda.graph`](https://pytorch.org/docs/master/generated/torch.cuda.graph.html#torch.cuda.graph) and [`torch.cuda.make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables).", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "[`torch.cuda.graph`](https://pytorch.org/docs/master/generated/torch.cuda.graph.html#torch.cuda.graph) is a simple, versatile context manager that captures CUDA work in its context. Before capture, warm up the workload to be captured by running a few eager iterations. Warmup must occur on a side stream. Because the graph reads from and writes to the same memory addresses in every replay, you must maintain long-lived references to tensors that hold input and output data during capture. To run the graph on new input data, copy new data to the capture\u2019s input tensor(s), replay the graph, then read the new output from the capture\u2019s output tensor(s).\n\nIf the entire network is capture safe, one can capture and replay the whole network as in the following example.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "```python\nN, D_in, H, D_out = 640, 4096, 2048, 1024\nmodel = torch.nn.Sequential(torch.nn.Linear(D_in, H),\n torch.nn.Dropout(p=0.2),\n torch.nn.Linear(H, D_out),\n torch.nn.Dropout(p=0.1)).cuda()\nloss_fn = torch.nn.MSELoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n\n# Placeholders used for capture\nstatic_input = torch.randn(N, D_in, device='cuda')\nstatic_target = torch.randn(N, D_out, device='cuda')\n\n# warmup\n# Uses static_input and static_target here for convenience,\n# but in a real setting, because the warmup includes optimizer.step()\n# you must use a few batches of real data.\ns = torch.cuda.Stream()\ns.wait_stream(torch.cuda.current_stream())\nwith torch.cuda.stream(s):\n for i in range(3):\n optimizer.zero_grad(set_to_none=True)\n y_pred = model(static_input)\n loss = loss_fn(y_pred, static_target)\n loss.backward()\n optimizer.step()\ntorch.cuda.current_stream().wait_stream(s)", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
{"page_content": "# capture\ng = torch.cuda.CUDAGraph()\n# Sets grads to None before capture, so backward() will create\n# .grad attributes with allocations from the graph's private pool\noptimizer.zero_grad(set_to_none=True)\nwith torch.cuda.graph(g):\n static_y_pred = model(static_input)\n static_loss = loss_fn(static_y_pred, static_target)\n static_loss.backward()\n optimizer.step()\n\nreal_inputs = [torch.rand_like(static_input) for _ in range(10)]\nreal_targets = [torch.rand_like(static_target) for _ in range(10)]", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "for data, target in zip(real_inputs, real_targets):\n # Fills the graph's input memory with new data to compute on\n static_input.copy_(data)\n static_target.copy_(target)\n # replay() includes forward, backward, and step.\n # You don't even need to call optimizer.zero_grad() between iterations\n # because the captured backward refills static .grad tensors in place.\n g.replay()\n # Params have been updated. static_y_pred, static_loss, and .grad", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "# attributes hold values from computing on this iteration's data.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "If some of your network is unsafe to capture (e.g., due to dynamic control flow, dynamic shapes, CPU syncs, or essential CPU-side logic), you can run the unsafe part(s) eagerly and use [`torch.cuda.make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables) to graph only the capture-safe part(s). This is demonstrated next.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "[`make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables) accepts callables (functions or [`nn.Module`](https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module) and returns graphed versions. By default, callables returned by [`make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables) are autograd-aware, and can be used", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "in the training loop as direct replacements for the functions or [`nn.Module`](https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module) you passed. [`make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables) internally creates [`CUDAGraph`](https://pytorch.org/docs/master/generated/torch.cuda.CUDAGraph.html#torch.cuda.CUDAGraph) objects, runs warm up iterations, and maintains static inputs and outputs", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "as needed. Therefore, (unlike with [`torch.cuda.graph`](https://pytorch.org/docs/master/generated/torch.cuda.graph.html#torch.cuda.graph)) you don\u2019t need to handle those manually.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "In the following example, data-dependent dynamic control flow means the network isn\u2019t capturable end-to-end, but [`make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables)() lets us capture and run graph-safe sections as graphs regardless:\n\n\n```python\nN, D_in, H, D_out = 640, 4096, 2048, 1024\n\nmodule1 = torch.nn.Linear(D_in, H).cuda()\nmodule2 = torch.nn.Linear(H, D_out).cuda()\nmodule3 = torch.nn.Linear(H, D_out).cuda()", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "loss_fn = torch.nn.MSELoss()\noptimizer = torch.optim.SGD(chain(module1.parameters(),\n module2.parameters(),\n module3.parameters()),\n lr=0.1)\n\n# Sample inputs used for capture\n# requires_grad state of sample inputs must match\n# requires_grad state of real inputs each callable will see.\nx = torch.randn(N, D_in, device='cuda')\nh = torch.randn(N, H, device='cuda', requires_grad=True)", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "module1 = torch.cuda.make_graphed_callables(module1, (x,))\nmodule2 = torch.cuda.make_graphed_callables(module2, (h,))\nmodule3 = torch.cuda.make_graphed_callables(module3, (h,))\n\nreal_inputs = [torch.rand_like(x) for _ in range(10)]\nreal_targets = [torch.randn(N, D_out, device=\"cuda\") for _ in range(10)]\n\nfor data, target in zip(real_inputs, real_targets):\n optimizer.zero_grad(set_to_none=True)\n\n tmp = module1(data) # forward ops run as a graph", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "if tmp.sum().item() > 0:\n tmp = module2(tmp) # forward ops run as a graph\n else:\n tmp = module3(tmp) # forward ops run as a graph\n\n loss = loss_fn(tmp, target)\n # module2's or module3's (whichever was chosen) backward ops,\n # as well as module1's backward ops, run as graphs\n loss.backward()\n optimizer.step()", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "# Example use cases", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "MLPerf v1.0 training workloads\n\nThe PyTorch CUDA graphs functionality was instrumental in scaling NVIDIA\u2019s MLPerf training v1.0 workloads (implemented in PyTorch) to over 4000 GPUs, setting new [records across the board](https://blogs.nvidia.com/blog/2021/06/30/mlperf-ai-training-partners/). We illustrate below two MLPerf workloads where the most significant gains were observed with the use of CUDA graphs, yielding up to ~1.7x speedup.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "| | Number of GPUs | Speedup from CUDA-graphs |\n|-----------------------------|----------------:|-------------------------:|\n| Mask R-CNN | 272 | 1.70\u00d7 |\n| BERT | 4096 | 1.12\u00d7 |\n\nTable 1. MLPerf training v1.0 performance improvement with PyTorch CUDA graph.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Mask R-CNN", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Deep learning frameworks use GPUs to accelerate computations, but a significant amount of code still runs on CPU cores. CPU cores process meta-data like tensor shapes in order to prepare arguments needed to launch GPU kernels. Processing meta-data is a fixed cost while the cost of the computational work done by the GPUs is positively correlated with batch size. For large batch sizes, CPU overhead is a negligible percentage of total run time cost, but at small batch sizes CPU overhead can become larger than", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "GPU run time. When that happens, GPUs go idle between kernel calls. This issue can be identified on an NSight timeline plot in Figure 3. The plot below shows the \u201cbackbone\u201d portion of Mask R-CNN with per-gpu batch size of 1 before graphing. The green portion shows CPU load while the blue portion shows GPU load. In this profile we see that the CPU is maxed out at 100% load while GPU is idle most of the time, there is a lot of empty space between GPU kernels.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n Figure 3: NSight timeline plot of Mask R-CNN\n
", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "CUDA graphs can automatically eliminate CPU overhead when tensor shapes are static. A complete graph of all the kernel calls is captured during the first step, in subsequent steps the entire graph is launched with a single op, eliminating all the CPU overhead, as observed in Figure 4..", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n Figure 4: CUDA graphs optimization\n
", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "With graphing, we see that the GPU kernels are tightly packed and GPU utilization remains high. The graphed portion now runs in 6 ms instead of 31ms, a speedup of 5x. We did not graph the entire model, mostly just the resnet backbone, which resulted in an overall speedup of ~1.7x.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "In order to increase the scope of the graph, we made some changes in the software stack to eliminate some of the CPU-GPU synchronization points. In MLPerf v1.0, this work included changing the implementation of torch.randperm function to use CUB instead of Thrust because the latter is a synchronous C++ template library. These improvements are available in the latest NGC container.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "BERT", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Similarly, by graph capturing the model, we eliminate CPU overhead and accompanying synchronization overhead. CUDA graphs implementation results in a 1.12x performance boost for our max-scale BERT configuration. To maximize the benefits from CUDA graphs, it is important to keep the scope of the graph as large as possible. To achieve this, we modified the model script to remove CPU-GPU synchronizations during the execution such that the full model can be graph captured. Furthermore, we also made sure that", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "the tensor sizes during the execution are static within the scope of the graph. For instance, in BERT, only a specific subset of total tokens contribute to loss function, determined by a pre-generated mask tensor. Extracting the indices of valid tokens from this mask, and using these indices to gather the tokens that contribute to the loss, results in a tensor with a dynamic shape, i.e. with shape that is not constant across iterations. In order to make sure tensor sizes are static, instead of using the", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "dynamic-shape tensors in the loss computation, we used static shape tensors where a mask is used to indicate which elements are valid. As a result, all tensor shapes are static. Dynamic shapes also require CPU-GPU synchronization since it has to involve the framework\u2019s memory management on the CPU side. With static-only shapes, no CPU-GPU synchronizations are necessary. This is shown in Figure 5.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "\n\t
\n\t
\n\tFigure 5. By using a fixed size tensor and a boolean mask as described in the text, we are able to eliminate CPU synchronizations needed for dynamic sized tensors \n
", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "CUDA graphs in NVIDIA DL examples collection", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Single GPU use cases can also benefit from using CUDA Graphs. This is particularly true for workloads launching many short kernels with small batches. A good example is training and inference for recommender systems. Below we present preliminary benchmark results for NVIDIA's implementation of the Deep Learning Recommendation Model (DLRM) from our Deep Learning Examples collection. Using CUDA graphs for this workload provides significant speedups for both training and inference. The effect is particularly", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "visible when using very small batch sizes, where CPU overheads are more pronounced.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "CUDA graphs are being actively integrated into other PyTorch NGC model scripts and the NVIDIA Github deep learning examples. Stay tuned for more examples on how to use it.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "\n\t
\n
\n\n\t
\n
", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Figure 6: CUDA graphs optimization for the DLRM model.\n
", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "# Call to action: CUDA Graphs in PyTorch v1.10", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "CUDA graphs can provide substantial benefits for workloads that comprise many small GPU kernels and hence bogged down by CPU launch overheads. This has been demonstrated in our MLPerf efforts, optimizing PyTorch models. Many of these optimizations, including CUDA graphs, have or will eventually be integrated into our PyTorch NGC model scripts [collection](https://ngc.nvidia.com/catalog/collections?orderBy=scoreDESC&pageNumber=0&query=pytorch&quickFilter=&filters=) and the NVIDIA [Github deep learning", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/). For now, check out our open-source MLPerf training v1.0 [implementation](https://github.com/mlcommons/training_results_v1.0/tree/master/NVIDIA) which could serve as a good starting point to see CUDA graph in action. Alternatively, try the PyTorch CUDA graphs API on your own workloads.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "We thank many NVIDIAN\u2019s and Facebook engineers for their discussions and suggestions: \n[Karthik Mandakolathur US](mailto:karthik@nvidia.com),\n[Tomasz Grel](mailto:tgrel@nvidia.com), \n[PLJoey Conway](mailto:jconway@nvidia.com), \n[Arslan Zulfiqar US](mailto:azulfiqar@nvidia.com)", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Authors bios\n\n[**Vinh Nguyen**](mailto:vinhn@nvidia.com)\n*DL Engineer, NVIDIA*\n\nVinh is a Deep learning engineer and data scientist, having published more than 50 scientific articles attracting more than 2500 citations. At NVIDIA, his work spans a wide range of deep learning and AI applications, including speech, language and vision processing, and recommender systems.\n\n[**Michael Carilli**](mailto:mcarilli@nvidia.com)\n*Senior Developer Technology Engineer, NVIDIA*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Michael worked at the Air Force Research Laboratory optimizing CFD code for modern parallel architectures. He holds a PhD in computational physics from the University of California, Santa Barbara. A member of the PyTorch team, he focuses on making GPU training fast, numerically stable, and easy(er) for internal teams, external customers, and Pytorch community users.\n\n[**Sukru Burc Eryilmaz**](mailto:seryilmaz@nvidia.com)\n*Senior Architect in Dev Arch, NVIDIA*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Sukru received his PhD from Stanford University, and B.S from Bilkent University. He currently works on improving the end-to-end performance of neural network training both at single-node scale and supercomputer scale. \n\n[**Vartika Singh**](mailto:vartikas@nvidia.com)\n*Tech Partner Lead for DL Frameworks and Libraries, NVIDIA*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Vartika has led teams working in confluence of cloud and distributed computing, scaling and AI, influencing the design and strategy of major corporations. She currently works with the major frameworks and compiler organizations and developers within and outside NVIDIA, to help the design to work efficiently and optimally on NVIDIA hardware.\n\n[**Michelle Lin**](mailto:miclin@nvidia.com)\n*Product Intern, NVIDIA*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Michelle is currently pursuing an undergraduate degree in Computer Science and Business Administration at UC Berkeley. She is currently managing execution of projects such as conducting market research and creating marketing assets for Magnum IO.\n\n[**Natalia Gimelshein**](mailto:ngimel@fb.com)\n*Applied Research Scientist, Facebook*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Natalia Gimelshein worked on GPU performance optimization for deep learning workloads at NVIDIA and Facebook. She is currently a member of the PyTorch core team, working with partners to seamlessly support new software and hardware features.\n\n[**Alban Desmaison**](mailto:albandes@fb.com)\n*Research Engineer, Facebook*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Alban studied engineering and did a PhD in Machine Learning and Optimization, during which he was an OSS contributor to PyTorch prior to joining Facebook. His main responsibilities are maintaining some core library and features (autograd, optim, nn) and working on making PyTorch better in general.\n\n[**Edward Yang**](mailto:ezyang@fb.com)\n*Research Engineer, Facebook*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "Edward studied CS at MIT and then Stanford before starting at Facebook. He is a part of the PyTorch core team and is one of the leading contributors to PyTorch.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.10 Release, including CUDA Graphs APIs, Frontend and Compiler Improvements'\nauthor: Team PyTorch\n---\n\nWe are excited to announce the release of PyTorch 1.10. This release is composed of over 3,400 commits since 1.9, made by 426 contributors. We want to sincerely thank our community for continuously improving PyTorch.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch 1.10 updates are focused on improving training and performance of PyTorch, and developer usability. The full release notes are available [here](https://github.com/pytorch/pytorch/releases/tag/v1.10.0). Highlights include:\n1. CUDA Graphs APIs are integrated to reduce CPU overheads for CUDA workloads.\n2. Several frontend APIs such as FX, torch.special, and nn.Module Parametrization, have moved from beta to stable. \n3. Support for automatic fusion in JIT Compiler expands to CPUs in addition to GPUs.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "4. Android NNAPI support is now available in beta.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "Along with 1.10, we are also releasing major updates to the PyTorch libraries, which you can read about in [this blog post](https://pytorch.org/blog/pytorch-1.10-new-library-releases/).\n\n\n\n# Frontend APIs", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) Python code transformations with FX\n\nFX provides a Pythonic platform for transforming and lowering PyTorch programs. It is a toolkit for pass writers to facilitate Python-to-Python transformation of functions and nn.Module instances. This toolkit aims to support a subset of Python language semantics\u2014rather than the whole Python language\u2014to facilitate ease of implementation of transforms. With 1.10, FX is moving to stable.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "You can learn more about FX in the [official documentation](https://pytorch.org/docs/master/fx.html) and [GitHub examples](https://github.com/pytorch/examples/tree/master/fx) of program transformations implemented using ```torch.fx```.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) *torch.special* \nA ```torch.special module```, analogous to [SciPy\u2019s special module](https://docs.scipy.org/doc/scipy/reference/special.html), is now available in stable. The module has 30 operations, including gamma, Bessel, and (Gauss) error functions. \n\nRefer to this [documentation](https://pytorch.org/docs/master/special.html) for more details.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "(Stable) nn.Module Parametrization \n```nn.Module``` parametrizaton, a feature that allows users to parametrize any parameter or buffer of an ```nn.Module``` without modifying the ```nn.Module``` itself, is available in stable. This release adds weight normalization (```weight_norm```), orthogonal parametrization (matrix constraints and part of pruning) and more flexibility when creating your own parametrization.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "Refer to this [tutorial](https://pytorch.org/tutorials/intermediate/parametrizations.html) and the general [documentation](https://pytorch.org/docs/master/generated/torch.nn.utils.parametrizations.spectral_norm.html?highlight=parametrize) for more details.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) CUDA Graphs APIs Integration\nPyTorch now integrates CUDA Graphs APIs to reduce CPU overheads for CUDA workloads.\n\nCUDA Graphs greatly reduce the CPU overhead for CPU-bound cuda workloads and thus improve performance by increasing GPU utilization. For distributed workloads, CUDA Graphs also reduce jitter, and since parallel workloads have to wait for the slowest worker, reducing jitter improves overall parallel efficiency.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "for data, target in zip(real_inputs, real_targets):\n # Fills the graph's input memory with new data to compute on\n static_input.copy_(data)\n static_target.copy_(target)\n # replay() includes forward, backward, and step.\n # You don't even need to call optimizer.zero_grad() between iterations\n # because the captured backward refills static .grad tensors in place.\n g.replay()\n # Params have been updated. static_y_pred, static_loss, and .grad\n # attributes hold values from computing on this iteration's data.\n```\n\nIf some of your network is unsafe to capture (e.g., due to dynamic control flow, dynamic shapes, CPU syncs, or essential CPU-side logic), you can run the unsafe part(s) eagerly and use [`torch.cuda.make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables) to graph only the capture-safe part(s). This is demonstrated next.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "[`make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables) accepts callables (functions or [`nn.Module`](https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module) and returns graphed versions. By default, callables returned by [`make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables) are autograd-aware, and can be used in the training loop as direct replacements for the functions or [`nn.Module`](https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module) you passed. [`make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables) internally creates [`CUDAGraph`](https://pytorch.org/docs/master/generated/torch.cuda.CUDAGraph.html#torch.cuda.CUDAGraph) objects, runs warm up iterations, and maintains static inputs and outputs as needed. Therefore, (unlike with [`torch.cuda.graph`](https://pytorch.org/docs/master/generated/torch.cuda.graph.html#torch.cuda.graph)) you don\u2019t need to handle those manually.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "In the following example, data-dependent dynamic control flow means the network isn\u2019t capturable end-to-end, but [`make_graphed_callables`](https://pytorch.org/docs/master/generated/torch.cuda.make_graphed_callables.html#torch.cuda.make_graphed_callables)() lets us capture and run graph-safe sections as graphs regardless:\n\n\n```python\nN, D_in, H, D_out = 640, 4096, 2048, 1024\n\nmodule1 = torch.nn.Linear(D_in, H).cuda()\nmodule2 = torch.nn.Linear(H, D_out).cuda()\nmodule3 = torch.nn.Linear(H, D_out).cuda()\n\nloss_fn = torch.nn.MSELoss()\noptimizer = torch.optim.SGD(chain(module1.parameters(),\n module2.parameters(),\n module3.parameters()),\n lr=0.1)\n\n# Sample inputs used for capture\n# requires_grad state of sample inputs must match\n# requires_grad state of real inputs each callable will see.\nx = torch.randn(N, D_in, device='cuda')\nh = torch.randn(N, H, device='cuda', requires_grad=True)", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "module1 = torch.cuda.make_graphed_callables(module1, (x,))\nmodule2 = torch.cuda.make_graphed_callables(module2, (h,))\nmodule3 = torch.cuda.make_graphed_callables(module3, (h,))\n\nreal_inputs = [torch.rand_like(x) for _ in range(10)]\nreal_targets = [torch.randn(N, D_out, device=\"cuda\") for _ in range(10)]\n\nfor data, target in zip(real_inputs, real_targets):\n optimizer.zero_grad(set_to_none=True)\n\n tmp = module1(data) # forward ops run as a graph\n\n if tmp.sum().item() > 0:\n tmp = module2(tmp) # forward ops run as a graph\n else:\n tmp = module3(tmp) # forward ops run as a graph\n\n loss = loss_fn(tmp, target)\n # module2's or module3's (whichever was chosen) backward ops,\n # as well as module1's backward ops, run as graphs\n loss.backward()\n optimizer.step()\n```\n\n# Example use cases\n## MLPerf v1.0 training workloads", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "The PyTorch CUDA graphs functionality was instrumental in scaling NVIDIA\u2019s MLPerf training v1.0 workloads (implemented in PyTorch) to over 4000 GPUs, setting new [records across the board](https://blogs.nvidia.com/blog/2021/06/30/mlperf-ai-training-partners/). We illustrate below two MLPerf workloads where the most significant gains were observed with the use of CUDA graphs, yielding up to ~1.7x speedup.\n\n| | Number of GPUs | Speedup from CUDA-graphs |\n|-----------------------------|----------------:|-------------------------:|\n| Mask R-CNN | 272 | 1.70\u00d7 |\n| BERT | 4096 | 1.12\u00d7 |\n\nTable 1. MLPerf training v1.0 performance improvement with PyTorch CUDA graph.\n\n### Mask R-CNN", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "### Mask R-CNN\n\nDeep learning frameworks use GPUs to accelerate computations, but a significant amount of code still runs on CPU cores. CPU cores process meta-data like tensor shapes in order to prepare arguments needed to launch GPU kernels. Processing meta-data is a fixed cost while the cost of the computational work done by the GPUs is positively correlated with batch size. For large batch sizes, CPU overhead is a negligible percentage of total run time cost, but at small batch sizes CPU overhead can become larger than GPU run time. When that happens, GPUs go idle between kernel calls. This issue can be identified on an NSight timeline plot in Figure 3. The plot below shows the \u201cbackbone\u201d portion of Mask R-CNN with per-gpu batch size of 1 before graphing. The green portion shows CPU load while the blue portion shows GPU load. In this profile we see that the CPU is maxed out at 100% load while GPU is idle most of the time, there is a lot of empty space between GPU kernels.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n Figure 3: NSight timeline plot of Mask R-CNN\n
\n\nCUDA graphs can automatically eliminate CPU overhead when tensor shapes are static. A complete graph of all the kernel calls is captured during the first step, in subsequent steps the entire graph is launched with a single op, eliminating all the CPU overhead, as observed in Figure 4.. \n\n\n
\n
\n Figure 4: CUDA graphs optimization\n
", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "With graphing, we see that the GPU kernels are tightly packed and GPU utilization remains high. The graphed portion now runs in 6 ms instead of 31ms, a speedup of 5x. We did not graph the entire model, mostly just the resnet backbone, which resulted in an overall speedup of ~1.7x.\nIn order to increase the scope of the graph, we made some changes in the software stack to eliminate some of the CPU-GPU synchronization points. In MLPerf v1.0, this work included changing the implementation of torch.randperm function to use CUB instead of Thrust because the latter is a synchronous C++ template library. These improvements are available in the latest NGC container.\n\n\n### BERT", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "Similarly, by graph capturing the model, we eliminate CPU overhead and accompanying synchronization overhead. CUDA graphs implementation results in a 1.12x performance boost for our max-scale BERT configuration. To maximize the benefits from CUDA graphs, it is important to keep the scope of the graph as large as possible. To achieve this, we modified the model script to remove CPU-GPU synchronizations during the execution such that the full model can be graph captured. Furthermore, we also made sure that the tensor sizes during the execution are static within the scope of the graph. For instance, in BERT, only a specific subset of total tokens contribute to loss function, determined by a pre-generated mask tensor. Extracting the indices of valid tokens from this mask, and using these indices to gather the tokens that contribute to the loss, results in a tensor with a dynamic shape, i.e. with shape that is not constant across iterations. In order to make sure tensor sizes are static, instead of using the dynamic-shape tensors in the loss computation, we used static shape tensors where a mask is used to indicate which elements are valid. As a result, all tensor shapes are static. Dynamic shapes also require CPU-GPU synchronization since it has to involve the framework\u2019s memory management on the CPU side. With static-only shapes, no CPU-GPU synchronizations are necessary. This is shown in Figure 5.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "\n\t
\n\t
\n\tFigure 5. By using a fixed size tensor and a boolean mask as described in the text, we are able to eliminate CPU synchronizations needed for dynamic sized tensors \n
\n\n\n## CUDA graphs in NVIDIA DL examples collection\n\nSingle GPU use cases can also benefit from using CUDA Graphs. This is particularly true for workloads launching many short kernels with small batches. A good example is training and inference for recommender systems. Below we present preliminary benchmark results for NVIDIA's implementation of the Deep Learning Recommendation Model (DLRM) from our Deep Learning Examples collection. Using CUDA graphs for this workload provides significant speedups for both training and inference. The effect is particularly visible when using very small batch sizes, where CPU overheads are more pronounced.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "CUDA graphs are being actively integrated into other PyTorch NGC model scripts and the NVIDIA Github deep learning examples. Stay tuned for more examples on how to use it.\n\n\n\n\t
\n
\n\n\t
\n
\n\tFigure 6: CUDA graphs optimization for the DLRM model.\n
\n\n\n# Call to action: CUDA Graphs in PyTorch v1.10", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "CUDA graphs can provide substantial benefits for workloads that comprise many small GPU kernels and hence bogged down by CPU launch overheads. This has been demonstrated in our MLPerf efforts, optimizing PyTorch models. Many of these optimizations, including CUDA graphs, have or will eventually be integrated into our PyTorch NGC model scripts [collection](https://ngc.nvidia.com/catalog/collections?orderBy=scoreDESC&pageNumber=0&query=pytorch&quickFilter=&filters=) and the NVIDIA [Github deep learning examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/). For now, check out our open-source MLPerf training v1.0 [implementation](https://github.com/mlcommons/training_results_v1.0/tree/master/NVIDIA) which could serve as a good starting point to see CUDA graph in action. Alternatively, try the PyTorch CUDA graphs API on your own workloads.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "We thank many NVIDIAN\u2019s and Facebook engineers for their discussions and suggestions: \n[Karthik Mandakolathur US](mailto:karthik@nvidia.com),\n[Tomasz Grel](mailto:tgrel@nvidia.com), \n[PLJoey Conway](mailto:jconway@nvidia.com), \n[Arslan Zulfiqar US](mailto:azulfiqar@nvidia.com)\n\n## Authors bios\n\n[**Vinh Nguyen**](mailto:vinhn@nvidia.com)\n*DL Engineer, NVIDIA*\n\nVinh is a Deep learning engineer and data scientist, having published more than 50 scientific articles attracting more than 2500 citations. At NVIDIA, his work spans a wide range of deep learning and AI applications, including speech, language and vision processing, and recommender systems.\n\n[**Michael Carilli**](mailto:mcarilli@nvidia.com)\n*Senior Developer Technology Engineer, NVIDIA*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "Michael worked at the Air Force Research Laboratory optimizing CFD code for modern parallel architectures. He holds a PhD in computational physics from the University of California, Santa Barbara. A member of the PyTorch team, he focuses on making GPU training fast, numerically stable, and easy(er) for internal teams, external customers, and Pytorch community users.\n\n[**Sukru Burc Eryilmaz**](mailto:seryilmaz@nvidia.com)\n*Senior Architect in Dev Arch, NVIDIA*\n\nSukru received his PhD from Stanford University, and B.S from Bilkent University. He currently works on improving the end-to-end performance of neural network training both at single-node scale and supercomputer scale. \n\n[**Vartika Singh**](mailto:vartikas@nvidia.com)\n*Tech Partner Lead for DL Frameworks and Libraries, NVIDIA*", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "Vartika has led teams working in confluence of cloud and distributed computing, scaling and AI, influencing the design and strategy of major corporations. She currently works with the major frameworks and compiler organizations and developers within and outside NVIDIA, to help the design to work efficiently and optimally on NVIDIA hardware.\n\n[**Michelle Lin**](mailto:miclin@nvidia.com)\n*Product Intern, NVIDIA*\n\nMichelle is currently pursuing an undergraduate degree in Computer Science and Business Administration at UC Berkeley. She is currently managing execution of projects such as conducting market research and creating marketing assets for Magnum IO.\n\n[**Natalia Gimelshein**](mailto:ngimel@fb.com)\n*Applied Research Scientist, Facebook*\n\nNatalia Gimelshein worked on GPU performance optimization for deep learning workloads at NVIDIA and Facebook. She is currently a member of the PyTorch core team, working with partners to seamlessly support new software and hardware features.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "[**Alban Desmaison**](mailto:albandes@fb.com)\n*Research Engineer, Facebook*\n\nAlban studied engineering and did a PhD in Machine Learning and Optimization, during which he was an OSS contributor to PyTorch prior to joining Facebook. His main responsibilities are maintaining some core library and features (autograd, optim, nn) and working on making PyTorch better in general.\n\n[**Edward Yang**](mailto:ezyang@fb.com)\n*Research Engineer, Facebook*\n\nEdward studied CS at MIT and then Stanford before starting at Facebook. He is a part of the PyTorch core team and is one of the leading contributors to PyTorch.", "metadata": {"source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.10 Release, including CUDA Graphs APIs, Frontend and Compiler Improvements'\nauthor: Team PyTorch\n---\n\nWe are excited to announce the release of PyTorch 1.10. This release is composed of over 3,400 commits since 1.9, made by 426 contributors. We want to sincerely thank our community for continuously improving PyTorch. \n\nPyTorch 1.10 updates are focused on improving training and performance of PyTorch, and developer usability. The full release notes are available [here](https://github.com/pytorch/pytorch/releases/tag/v1.10.0). Highlights include:\n1. CUDA Graphs APIs are integrated to reduce CPU overheads for CUDA workloads.\n2. Several frontend APIs such as FX, torch.special, and nn.Module Parametrization, have moved from beta to stable. \n3. Support for automatic fusion in JIT Compiler expands to CPUs in addition to GPUs.\n4. Android NNAPI support is now available in beta.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "Along with 1.10, we are also releasing major updates to the PyTorch libraries, which you can read about in [this blog post](https://pytorch.org/blog/pytorch-1.10-new-library-releases/).\n\n\n\n# Frontend APIs\n\n### (Stable) Python code transformations with FX\n\nFX provides a Pythonic platform for transforming and lowering PyTorch programs. It is a toolkit for pass writers to facilitate Python-to-Python transformation of functions and nn.Module instances. This toolkit aims to support a subset of Python language semantics\u2014rather than the whole Python language\u2014to facilitate ease of implementation of transforms. With 1.10, FX is moving to stable. \n\nYou can learn more about FX in the [official documentation](https://pytorch.org/docs/master/fx.html) and [GitHub examples](https://github.com/pytorch/examples/tree/master/fx) of program transformations implemented using ```torch.fx```.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "### (Stable) *torch.special* \nA ```torch.special module```, analogous to [SciPy\u2019s special module](https://docs.scipy.org/doc/scipy/reference/special.html), is now available in stable. The module has 30 operations, including gamma, Bessel, and (Gauss) error functions. \n\nRefer to this [documentation](https://pytorch.org/docs/master/special.html) for more details.\n\n### (Stable) nn.Module Parametrization \n```nn.Module``` parametrizaton, a feature that allows users to parametrize any parameter or buffer of an ```nn.Module``` without modifying the ```nn.Module``` itself, is available in stable. This release adds weight normalization (```weight_norm```), orthogonal parametrization (matrix constraints and part of pruning) and more flexibility when creating your own parametrization.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "Refer to this [tutorial](https://pytorch.org/tutorials/intermediate/parametrizations.html) and the general [documentation](https://pytorch.org/docs/master/generated/torch.nn.utils.parametrizations.spectral_norm.html?highlight=parametrize) for more details.\n\n### (Beta) CUDA Graphs APIs Integration\nPyTorch now integrates CUDA Graphs APIs to reduce CPU overheads for CUDA workloads.\n\nCUDA Graphs greatly reduce the CPU overhead for CPU-bound cuda workloads and thus improve performance by increasing GPU utilization. For distributed workloads, CUDA Graphs also reduce jitter, and since parallel workloads have to wait for the slowest worker, reducing jitter improves overall parallel efficiency.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
{"page_content": "Integration allows seamless interop between the parts of the network captured by cuda graphs, and parts of the network that cannot be captured due to graph limitations. \n \nRead the [note](https://pytorch.org/docs/master/notes/cuda.html#cuda-graphs) for more details and examples, and refer to the general [documentation](https://pytorch.org/docs/master/generated/torch.cuda.CUDAGraph.html#torch.cuda.CUDAGraph) for additional information.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Conjugate View", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch\u2019s conjugation for complex tensors ([torch.conj()](https://pytorch.org/docs/1.10.0/generated/torch.conj.html?highlight=conj#torch.conj)) is now a constant time operation, and returns a view of the input tensor with a conjugate bit set as can be seen by calling [torch.is_conj()](https://pytorch.org/docs/1.10.0/generated/torch.is_conj.html?highlight=is_conj#torch.is_conj) . This has already been leveraged in various other PyTorch operations like matrix multiplication, dot product etc., to fuse", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "conjugation with the operation leading to significant performance gain and memory savings on both CPU and CUDA.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "# Distributed Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "Distributed Training Releases Now in Stable \nIn 1.10, there are a number of features that are moving from beta to stable in the distributed package:\n* **(Stable) Remote Module**: This feature allows users to operate a module on a remote worker like using a local module, where the RPCs are transparent to the user. Refer to this [documentation](https://pytorch.org/docs/master/rpc.html#remotemodule) for more details.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "* **(Stable) DDP Communication Hook**: This feature allows users to override how DDP synchronizes gradients across processes. Refer to this [documentation](https://pytorch.org/docs/master/rpc.html#remotemodule) for more details.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "* **(Stable) ZeroRedundancyOptimizer**: This feature can be used in conjunction with DistributedDataParallel to reduce the size of per-process optimizer states. With this stable release, it now can handle uneven inputs to different data-parallel workers. Check out this [tutorial](https://pytorch.org/tutorials/advanced/generic_join.html). We also improved the parameter partition algorithm to better balance memory and computation overhead across processes. Refer to this", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "[documentation](https://pytorch.org/docs/master/distributed.optim.html) and this [tutorial](https://pytorch.org/tutorials/recipes/zero_redundancy_optimizer.html) to learn more.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "# Performance Optimization and Tooling", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Profile-directed typing in TorchScript \nTorchScript has a hard requirement for source code to have type annotations in order for compilation to be successful. For a long time, it was only possible to add missing or incorrect type annotations through trial and error (i.e., by fixing the type-checking errors generated by torch.jit.script one by one), which was inefficient and time consuming.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "Now, we have enabled profile directed typing for torch.jit.script by leveraging existing tools like MonkeyType, which makes the process much easier, faster, and more efficient. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/jit.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) CPU Fusion \nIn PyTorch 1.10, we've added an LLVM-based JIT compiler for CPUs that can fuse together sequences of `torch` library calls to improve performance. While we've had this capability for some time on GPUs, this release is the first time we've brought compilation to the CPU. \nYou can check out a few performance results for yourself in this [Colab notebook](https://colab.research.google.com/drive/1xaH-L0XjsxUcS15GG220mtyrvIgDoZl6?usp=sharing).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) PyTorch Profiler \nThe objective of PyTorch Profiler is to target the execution steps that are the most costly in time and/or memory, and visualize the workload distribution between GPUs and CPUs. PyTorch 1.10 includes the following key features:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "* **Enhanced Memory View**: This helps you understand your memory usage better. This tool will help you avoid Out of Memory errors by showing active memory allocations at various points of your program run.\n* **Enhanced Automated Recommendations**: This helps provide automated performance recommendations to help optimize your model. The tools recommend changes to batch size, TensorCore, memory reduction technologies, etc.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "* **Enhanced Kernel View**: Additional columns show grid and block sizes as well as shared memory usage and registers per thread.\n* **Distributed Training**: Gloo is now supported for distributed training jobs.\n* **Correlate Operators in the Forward & Backward Pass**: This helps map the operators found in the forward pass to the backward pass, and vice versa, in a trace view.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "* **TensorCore**: This tool shows the Tensor Core (TC) usage and provides recommendations for data scientists and framework developers.\n* **NVTX**: Support for NVTX markers was ported from the legacy autograd profiler.\n* **Support for profiling on mobile devices**: The PyTorch profiler now has better integration with TorchScript and mobile backends, enabling trace collection for mobile workloads.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "Refer to this [documentation](https://pytorch.org/docs/stable/profiler.html) for details. Check out this [tutorial](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html) to learn how to get started with this feature. \n\n# PyTorch Mobile", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "(Beta) Android NNAPI Support in Beta \nLast year we [released prototype support](https://medium.com/pytorch/pytorch-mobile-now-supports-android-nnapi-e2a2aeb74534) for Android\u2019s Neural Networks API (NNAPI). NNAPI allows Android apps to run computationally intensive neural networks on the most powerful and efficient parts of the chips that power mobile phones, including GPUs (Graphics Processing Units) and NPUs (specialized Neural Processing Units).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "Since the prototype we\u2019ve added more op coverage, added support for load-time flexible shapes and ability to run the model on the host for testing. Try out this feature using the [tutorial](https://pytorch.org/tutorials/prototype/nnapi_mobilenetv2.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, Transfer Learning steps have been added to Object Detection examples. Check out this [GitHub page](https://github.com/pytorch/android-demo-app/tree/master/ObjectDetection#transfer-learning) to learn more. Please provide your feedback or ask questions on the [forum](https://discuss.pytorch.org/c/mobile/18). You can also check out [this presentation](https://www.youtube.com/watch?v=B-2spa3UCTU) to get an overview.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "Thanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), and [LinkedIn](https://www.linkedin.com/company/pytorch). \n\nCheers!\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Ambient Clinical Intelligence: Generating Medical Reports with PyTorch\"\nauthor: Miguel Del-Agua, Principal Research Scientist, Nuance and Jeremy Jancsary, Senior Principal Research Scientist, Nuance\nfeatured-img: \"\"\n---", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Introduction\n\nComplete and accurate clinical documentation is an essential tool for tracking patient care. It allows for treatment plans to be shared among care teams to aid in continuity of care and ensures a transparent and effective process for reimbursement.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Physicians are responsible for documenting patient care. Traditional clinical documentation methods have resulted in a sub-par patient-provider experience, less time interacting with patients, and decreased work-life balance. A significant amount of physicians\u2019 time is spent in front of the computer doing administrative tasks. As a result, patients are less satisfied with the overall experience, and physicians, who prepare for years studying medicine, cannot practice at the top of their license and are", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "burned out. Every hour physicians provide direct clinical face time to patients results in nearly two additional hours spent on EHR and desk work within the clinic day. Outside office hours, physicians [spend another 1 to 2 hours of personal](https://www.acpjournals.org/doi/10.7326/m16-0961) time each night doing additional computer and other clerical work.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* [42% of all physicians reported having burnout. \u2013 Medscape](https://www.medscape.com/slideshow/2020-lifestyle-burnout-6012460)\n* [The problem has grown worse due to the pandemic with 64% of U.S. physicians now reporting burnout. - AAFP](https://www.aafp.org/journals/fpm/blogs/inpractice/entry/covid_burnout_survey.html#:~:text=Physician%20burnout%20was%20already%20a,5%2C000%20%E2%80%94%20practice%20in%20the%20U.S.)", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* [\"Too many bureaucratic tasks e.g., charting and paperwork\" is the leading contribution to burnout, increased computerization ranks 4th.](https://login.medscape.com/login/sso/getlogin?urlCache=aHR0cHM6Ly93d3cubWVkc2NhcGUuY29tL3NsaWRlc2hvdy8yMDIwLWxpZmVzdHlsZS1idXJub3V0LTYwMTI0NjA%3D&ac=401) - Medscape", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* [75% of U.S. Consumers Wish Their Healthcare Experiences Were More Personalized,](https://www.businesswire.com/news/home/20200218005006/en/75-of-U.S.-Consumers-Wish-Their-Healthcare-Experiences-Were-More-Personalized-Redpoint-Global-Survey-Reveals)- Business Wire", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* [61% of patients would visit their healthcare provider more often if the communication experience felt more personalized.](https://www.businesswire.com/news/home/20200218005006/en/75-of-U.S.-Consumers-Wish-Their-Healthcare-Experiences-Were-More-Personalized-Redpoint-Global-Survey-Reveals) \u2013 Business Wire", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Physician burnout is one of the primary causes for increased [medical errors](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6175626/), malpractice suits, turnover, and decreased access to care. Burnout leads to an increase in healthcare costs and a decrease in overall patient satisfaction. [Burnout costs the United States $4.6 billion a year.](https://www.nejm.org/doi/full/10.1056/NEJMp2003149)", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "What can we do to bring back trust, joy, and humanity to the delivery of healthcare? A significant portion of the administrative work consists of entering patient data into Electronic Health Records (EHRs) and creating clinical documentation. Clinical documentation is created from information already in the EHR as well as from the patient-provider encounter conversation.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "This article will showcase how the Nuance Dragon Ambient eXperience (DAX), an AI-powered, voice-enabled, ambient clinical intelligence solution, automatically documents patient encounters accurately and efficiently at the point of care and the technologies that enable it.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Nuance DAX enhances the quality of care and patient experience, increases provider efficiency and satisfaction, and improves financial outcomes. It can be used in office and telehealth settings in all ambulatory specialties, including primary and urgent care.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Natural Language Processing", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Natural Language Processing (NLP) is one of the most challenging fields in Artificial Intelligence (AI). It comprehends a set of algorithms that allow computers to understand or generate the language used by humans. These algorithms can process and analyze vast amounts of natural language data from different sources (either sound or text) to build models that can understand, classify, or even generate natural language as humans would. Like other fields in AI, NLP has significantly progressed thanks to the", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "advent of Deep Learning (DL), which has resulted in models that can obtain results on par with humans in some tasks.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "These advanced NLP techniques are being applied in healthcare. During a typical patient-provider encounter, a conversation ensues where the doctor constructs, through questions and answers, a chronological description of the development of the patient's presenting illness or symptoms. A physician examines the patient and makes clinical decisions to establish a diagnosis and determine a treatment plan. This conversation, and data in the EHR, provide the required information for physicians to generate the", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "clinical documentation, referred to as medical reports.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Two main NLP components play a role in automating the creation of clinical documentation. The first component, Automatic Speech Recognition (ASR), is used to translate speech into text. It takes the audio recording of the encounter and generates a conversation transcription (cf. Figure 2). The second component, Automatic Text Summarization, helps generate summaries from large text documents. This component is responsible for understanding and capturing the nuances and most essential aspects from the", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "transcribed conversation into a final report in narrative form (cf. Figure 3), structured form, or a combination of both.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "We will focus on this second component, Automatic Text Summarization, which is a difficult task with many challenges:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* Its performance is tied to the ASR quality from multiple speakers (noisy input).\n* The input is conversational in nature and contains layman's terms.\n* Protected Health Information (PHI) regulations limit medical data access.\n* The information for one output sentence is potentially spread across multiple conversation turns.\n* There is no explicit sentence alignment between input and output.\n* Various medical specialties, encounter types, and EHR systems constitute a broad and complex output space.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* Physicians have different styles of conducting encounters and have their preferences for medical reports; there is no standard. \n* Standard summarization metrics might differ from human judgment of quality.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\nFigure 2: Transcript of a patient-doctor conversation\n
\n\n\n
\n
\n\n\nFigure 3: Excerpt of an AI-generated medical report. HPI stands for History of present illness.\n
", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Text Summarization with PyTorch and Fairseq", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[PyTorch](https://pytorch.org/) is an open-source machine learning framework developed by Facebook that helps researchers prototype Deep Learning models. The [Fairseq](https://github.com/pytorch/fairseq) toolkit is built on top of PyTorch and focuses on sequence generation tasks, such as Neural Machine Translation (NMT) or Text Summarization. Fairseq features an active community that is continuously providing reference implementations of state-of-the-art models. It contains many built-in components (model", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "architectures, modules, loss functions, and optimizers) and is easily extendable with plugins.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Text summarization constitutes a significant challenge in NLP. We need models capable of generating a short version of a document while retaining the key points and avoiding uninformative content. These challenges can be addressed with different approaches. 1). Abstractive text summarization aimed at training models that can generate a summary in narrative form. 2). Extractive methods where the models are trained to select the most important parts from the input text. 3). A combination of the two, where", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "the essential parts from the input are selected and then summarized in an abstractive fashion. Hence, summarization can be accomplished via a single end-to-end network or as a pipeline of extractive and abstractive components. To that end, Fairseq provides all the necessary tools to be successful in our endeavor. It features either end-to-end models such as the classical Transformer, different types of Language Models and pre-trained versions that enable researchers to focus on what matters most\u2014to build", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "state-of-the-art models that generate valuable reports.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "However, we are not just summarizing the transcribed conversation; we generate high-quality medical reports, which have many considerations.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* Every section of a medical report is different in terms of content, structure, fluency, etc.\n* All medical facts mentioned in the conversation should be present in the report, for example, a particular treatment or dosage.\n* In the healthcare domain, the vocabulary is extensive, and models need to deal with medical terminology.\n* Patient-doctor conversations are usually much longer than the final report.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### [Beta] Conjugate View\nPyTorch\u2019s conjugation for complex tensors ([torch.conj()](https://pytorch.org/docs/1.10.0/generated/torch.conj.html?highlight=conj#torch.conj)) is now a constant time operation, and returns a view of the input tensor with a conjugate bit set as can be seen by calling [torch.is_conj()](https://pytorch.org/docs/1.10.0/generated/torch.is_conj.html?highlight=is_conj#torch.is_conj) . This has already been leveraged in various other PyTorch operations like matrix multiplication, dot product etc., to fuse conjugation with the operation leading to significant performance gain and memory savings on both CPU and CUDA.\n\n# Distributed Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "### Distributed Training Releases Now in Stable \nIn 1.10, there are a number of features that are moving from beta to stable in the distributed package:\n* **(Stable) Remote Module**: This feature allows users to operate a module on a remote worker like using a local module, where the RPCs are transparent to the user. Refer to this [documentation](https://pytorch.org/docs/master/rpc.html#remotemodule) for more details.\n* **(Stable) DDP Communication Hook**: This feature allows users to override how DDP synchronizes gradients across processes. Refer to this [documentation](https://pytorch.org/docs/master/rpc.html#remotemodule) for more details. \n* **(Stable) ZeroRedundancyOptimizer**: This feature can be used in conjunction with DistributedDataParallel to reduce the size of per-process optimizer states. With this stable release, it now can handle uneven inputs to different data-parallel workers. Check out this [tutorial](https://pytorch.org/tutorials/advanced/generic_join.html). We also improved the parameter partition algorithm to better balance memory and computation overhead across processes. Refer to this [documentation](https://pytorch.org/docs/master/distributed.optim.html) and this [tutorial](https://pytorch.org/tutorials/recipes/zero_redundancy_optimizer.html) to learn more.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "# Performance Optimization and Tooling\n\n### [Beta] Profile-directed typing in TorchScript \nTorchScript has a hard requirement for source code to have type annotations in order for compilation to be successful. For a long time, it was only possible to add missing or incorrect type annotations through trial and error (i.e., by fixing the type-checking errors generated by torch.jit.script one by one), which was inefficient and time consuming. \n\nNow, we have enabled profile directed typing for torch.jit.script by leveraging existing tools like MonkeyType, which makes the process much easier, faster, and more efficient. For more details, refer to the [documentation](https://pytorch.org/docs/1.9.0/jit.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "### (Beta) CPU Fusion \nIn PyTorch 1.10, we've added an LLVM-based JIT compiler for CPUs that can fuse together sequences of `torch` library calls to improve performance. While we've had this capability for some time on GPUs, this release is the first time we've brought compilation to the CPU. \nYou can check out a few performance results for yourself in this [Colab notebook](https://colab.research.google.com/drive/1xaH-L0XjsxUcS15GG220mtyrvIgDoZl6?usp=sharing).\n\n### (Beta) PyTorch Profiler \nThe objective of PyTorch Profiler is to target the execution steps that are the most costly in time and/or memory, and visualize the workload distribution between GPUs and CPUs. PyTorch 1.10 includes the following key features:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "* **Enhanced Memory View**: This helps you understand your memory usage better. This tool will help you avoid Out of Memory errors by showing active memory allocations at various points of your program run.\n* **Enhanced Automated Recommendations**: This helps provide automated performance recommendations to help optimize your model. The tools recommend changes to batch size, TensorCore, memory reduction technologies, etc.\n* **Enhanced Kernel View**: Additional columns show grid and block sizes as well as shared memory usage and registers per thread.\n* **Distributed Training**: Gloo is now supported for distributed training jobs.\n* **Correlate Operators in the Forward & Backward Pass**: This helps map the operators found in the forward pass to the backward pass, and vice versa, in a trace view.\n* **TensorCore**: This tool shows the Tensor Core (TC) usage and provides recommendations for data scientists and framework developers.\n* **NVTX**: Support for NVTX markers was ported from the legacy autograd profiler.\n* **Support for profiling on mobile devices**: The PyTorch profiler now has better integration with TorchScript and mobile backends, enabling trace collection for mobile workloads.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "Refer to this [documentation](https://pytorch.org/docs/stable/profiler.html) for details. Check out this [tutorial](https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html) to learn how to get started with this feature. \n\n# PyTorch Mobile\n\n### (Beta) Android NNAPI Support in Beta \nLast year we [released prototype support](https://medium.com/pytorch/pytorch-mobile-now-supports-android-nnapi-e2a2aeb74534) for Android\u2019s Neural Networks API (NNAPI). NNAPI allows Android apps to run computationally intensive neural networks on the most powerful and efficient parts of the chips that power mobile phones, including GPUs (Graphics Processing Units) and NPUs (specialized Neural Processing Units). \n\nSince the prototype we\u2019ve added more op coverage, added support for load-time flexible shapes and ability to run the model on the host for testing. Try out this feature using the [tutorial](https://pytorch.org/tutorials/prototype/nnapi_mobilenetv2.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "Additionally, Transfer Learning steps have been added to Object Detection examples. Check out this [GitHub page](https://github.com/pytorch/android-demo-app/tree/master/ObjectDetection#transfer-learning) to learn more. Please provide your feedback or ask questions on the [forum](https://discuss.pytorch.org/c/mobile/18). You can also check out [this presentation](https://www.youtube.com/watch?v=B-2spa3UCTU) to get an overview. \n\nThanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch/pytorch/issues). To get the latest news from PyTorch, follow us on [Twitter](https://twitter.com/PyTorch), [Medium](https://medium.com/pytorch), [YouTube](https://www.youtube.com/pytorch), and [LinkedIn](https://www.linkedin.com/company/pytorch). \n\nCheers!\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Ambient Clinical Intelligence: Generating Medical Reports with PyTorch\"\nauthor: Miguel Del-Agua, Principal Research Scientist, Nuance and Jeremy Jancsary, Senior Principal Research Scientist, Nuance\nfeatured-img: \"\"\n---\n\n## Introduction\n\nComplete and accurate clinical documentation is an essential tool for tracking patient care. It allows for treatment plans to be shared among care teams to aid in continuity of care and ensures a transparent and effective process for reimbursement.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Physicians are responsible for documenting patient care. Traditional clinical documentation methods have resulted in a sub-par patient-provider experience, less time interacting with patients, and decreased work-life balance. A significant amount of physicians\u2019 time is spent in front of the computer doing administrative tasks. As a result, patients are less satisfied with the overall experience, and physicians, who prepare for years studying medicine, cannot practice at the top of their license and are burned out. Every hour physicians provide direct clinical face time to patients results in nearly two additional hours spent on EHR and desk work within the clinic day. Outside office hours, physicians [spend another 1 to 2 hours of personal](https://www.acpjournals.org/doi/10.7326/m16-0961) time each night doing additional computer and other clerical work.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "* [42% of all physicians reported having burnout. \u2013 Medscape](https://www.medscape.com/slideshow/2020-lifestyle-burnout-6012460)\n* [The problem has grown worse due to the pandemic with 64% of U.S. physicians now reporting burnout. - AAFP](https://www.aafp.org/journals/fpm/blogs/inpractice/entry/covid_burnout_survey.html#:~:text=Physician%20burnout%20was%20already%20a,5%2C000%20%E2%80%94%20practice%20in%20the%20U.S.)\n* [\"Too many bureaucratic tasks e.g., charting and paperwork\" is the leading contribution to burnout, increased computerization ranks 4th.](https://login.medscape.com/login/sso/getlogin?urlCache=aHR0cHM6Ly93d3cubWVkc2NhcGUuY29tL3NsaWRlc2hvdy8yMDIwLWxpZmVzdHlsZS1idXJub3V0LTYwMTI0NjA%3D&ac=401) - Medscape\n* [75% of U.S. Consumers Wish Their Healthcare Experiences Were More Personalized,](https://www.businesswire.com/news/home/20200218005006/en/75-of-U.S.-Consumers-Wish-Their-Healthcare-Experiences-Were-More-Personalized-Redpoint-Global-Survey-Reveals)- Business Wire\n* [61% of patients would visit their healthcare provider more often if the communication experience felt more personalized.](https://www.businesswire.com/news/home/20200218005006/en/75-of-U.S.-Consumers-Wish-Their-Healthcare-Experiences-Were-More-Personalized-Redpoint-Global-Survey-Reveals) \u2013 Business Wire", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Physician burnout is one of the primary causes for increased [medical errors](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6175626/), malpractice suits, turnover, and decreased access to care. Burnout leads to an increase in healthcare costs and a decrease in overall patient satisfaction. [Burnout costs the United States $4.6 billion a year.](https://www.nejm.org/doi/full/10.1056/NEJMp2003149)\n\nWhat can we do to bring back trust, joy, and humanity to the delivery of healthcare? A significant portion of the administrative work consists of entering patient data into Electronic Health Records (EHRs) and creating clinical documentation. Clinical documentation is created from information already in the EHR as well as from the patient-provider encounter conversation.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "This article will showcase how the Nuance Dragon Ambient eXperience (DAX), an AI-powered, voice-enabled, ambient clinical intelligence solution, automatically documents patient encounters accurately and efficiently at the point of care and the technologies that enable it.\n\nNuance DAX enhances the quality of care and patient experience, increases provider efficiency and satisfaction, and improves financial outcomes. It can be used in office and telehealth settings in all ambulatory specialties, including primary and urgent care.\n\n\n
\n
\n\n## Natural Language Processing", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Natural Language Processing (NLP) is one of the most challenging fields in Artificial Intelligence (AI). It comprehends a set of algorithms that allow computers to understand or generate the language used by humans. These algorithms can process and analyze vast amounts of natural language data from different sources (either sound or text) to build models that can understand, classify, or even generate natural language as humans would. Like other fields in AI, NLP has significantly progressed thanks to the advent of Deep Learning (DL), which has resulted in models that can obtain results on par with humans in some tasks.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "These advanced NLP techniques are being applied in healthcare. During a typical patient-provider encounter, a conversation ensues where the doctor constructs, through questions and answers, a chronological description of the development of the patient's presenting illness or symptoms. A physician examines the patient and makes clinical decisions to establish a diagnosis and determine a treatment plan. This conversation, and data in the EHR, provide the required information for physicians to generate the clinical documentation, referred to as medical reports.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Two main NLP components play a role in automating the creation of clinical documentation. The first component, Automatic Speech Recognition (ASR), is used to translate speech into text. It takes the audio recording of the encounter and generates a conversation transcription (cf. Figure 2). The second component, Automatic Text Summarization, helps generate summaries from large text documents. This component is responsible for understanding and capturing the nuances and most essential aspects from the transcribed conversation into a final report in narrative form (cf. Figure 3), structured form, or a combination of both.\n\nWe will focus on this second component, Automatic Text Summarization, which is a difficult task with many challenges:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "* Its performance is tied to the ASR quality from multiple speakers (noisy input).\n* The input is conversational in nature and contains layman's terms.\n* Protected Health Information (PHI) regulations limit medical data access.\n* The information for one output sentence is potentially spread across multiple conversation turns.\n* There is no explicit sentence alignment between input and output.\n* Various medical specialties, encounter types, and EHR systems constitute a broad and complex output space. \n* Physicians have different styles of conducting encounters and have their preferences for medical reports; there is no standard. \n* Standard summarization metrics might differ from human judgment of quality.\n\n\n
\n
\n\n\nFigure 2: Transcript of a patient-doctor conversation\n
\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "\nFigure 3: Excerpt of an AI-generated medical report. HPI stands for History of present illness.\n
\n\n## Text Summarization with PyTorch and Fairseq\n\n[PyTorch](https://pytorch.org/) is an open-source machine learning framework developed by Facebook that helps researchers prototype Deep Learning models. The [Fairseq](https://github.com/pytorch/fairseq) toolkit is built on top of PyTorch and focuses on sequence generation tasks, such as Neural Machine Translation (NMT) or Text Summarization. Fairseq features an active community that is continuously providing reference implementations of state-of-the-art models. It contains many built-in components (model architectures, modules, loss functions, and optimizers) and is easily extendable with plugins.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Text summarization constitutes a significant challenge in NLP. We need models capable of generating a short version of a document while retaining the key points and avoiding uninformative content. These challenges can be addressed with different approaches. 1). Abstractive text summarization aimed at training models that can generate a summary in narrative form. 2). Extractive methods where the models are trained to select the most important parts from the input text. 3). A combination of the two, where the essential parts from the input are selected and then summarized in an abstractive fashion. Hence, summarization can be accomplished via a single end-to-end network or as a pipeline of extractive and abstractive components. To that end, Fairseq provides all the necessary tools to be successful in our endeavor. It features either end-to-end models such as the classical Transformer, different types of Language Models and pre-trained versions that enable researchers to focus on what matters most\u2014to build state-of-the-art models that generate valuable reports.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "However, we are not just summarizing the transcribed conversation; we generate high-quality medical reports, which have many considerations.\n\n* Every section of a medical report is different in terms of content, structure, fluency, etc.\n* All medical facts mentioned in the conversation should be present in the report, for example, a particular treatment or dosage.\n* In the healthcare domain, the vocabulary is extensive, and models need to deal with medical terminology.\n* Patient-doctor conversations are usually much longer than the final report.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
{"page_content": "All these challenges require our researchers to run a battery of extensive experiments. Thanks to the flexibility of PyTorch and Fairseq, their productivity has greatly increased. Further, the ecosystem offers an easy path from ideation, implementation, experimentation, and final roll-out to production. Using multiple GPUs or CPUs is as simple as providing an additional argument to the tools, and because of the tight Python integration, PyTorch code can be easily debugged.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In our continuous effort to contribute to the open-source community, features have been developed at Nuance and pushed to the Fairseq GitHub repository. These try to overcome some of the challenges mentioned such as, facilitating copying of, especially rare or unseen, words from the input to summary, training speedups by improving Tensor Core utilization, and ensuring TorchScript compatibility of different Transformer configurations. Following, we will show an example of how to train a Transformer model", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "with a Pointer Generator mechanism (Transformer-PG), which can copy words from the input.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "How to build a Transformer model with a Pointer Generator mechanism\n\nIn this step-by-step guide, it is assumed the user has already installed PyTorch and Fairseq.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "1. Create a vocabulary and extend it with source position markers:\n\nThese markers will allow the model to point to any word in the input sequence.\n\n```python\nvocab_size=\nposition_markers=512\nexport LC_ALL=C\ncat train.src train.tgt |\n tr -s '[:space:]' '\\n' |\n sort |\n uniq -c |\n sort -k1,1bnr -k2 |\n head -n \"$((vocab_size - 4))\" |\n awk '{ print $2 \" \" $1 }' > dict.pg.txt\npython3 -c \"[print(' 0'.format(n)) for n in range($position_markers)]\" >> dict.pg.txt", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "This will create a file \"dict.pg.txt\" that contains the \\ most frequent words followed by 512 position markers named from \"\\\" to \"\\\".\n\nIn case we have an input like\n\n```python\nsrc = \"Hello, I'm The Dogtor\"\n```\n\nit could happen that our model has been trained without the word \"Dogtor\" in its vocabulary. Therefore, when we feed this sequence into the model, it should be converted to:\n\n```python\nsrc = \"Hello, I'm The \"", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Now, \"\\\" is part of our vocabulary and could be predicted by the model (this is where the pointer-generator comes in). In such a case, we will only need to post-process the output to replace \"\\\" by the word at input position 3.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "2. Preprocess the text data to replace unknown words by its positional markers:\n\nWe can use the scripts from [https://github.com/pytorch/fairseq/tree/master/examples/pointer_generator](https://github.com/pytorch/fairseq/tree/master/examples/pointer_generator).", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```python\n# Considering we have our data in:\n# train_src = /path/to/train.src\n# train_tgt = /path/to/train.tgt\n# valid_src = /path/to/valid.src\n# valid_tgt = /path/to/valid.tgt\n./preprocess.py --source /path/to/train.src \\\n --target /path/to/train.tgt \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/train.pg.src \\\n --target-out /path/to/train.pg.tgt", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "./preprocess.py --source /path/to/valid.src \\\n --target /path/to/valid.tgt \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/valid.pg.src \\\n --target-out /path/to/valid.pg.tgt\n\n./preprocess.py --source /path/to/test.src \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/test.pg.src\n```", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3. Now let's binarize the data, so that it can be processed faster:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfairseq-preprocess --task \"translation\" \\\n --source-lang \"pg.src\" \\\n --target-lang \"pg.tgt\" \\\n --trainpref /path/to/train \\\n --validpref /path/to/valid \\\n --srcdict dict.pg.txt \\\n --cpu \\\n --joined-dictionary \\\n --destdir \n```", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "You might notice the type of task is \"translation\". This is because there is no \"summarization\" task available; we could understand it as a kind of NMT task where the input and output languages are shared and the output (summary) is shorter than the input.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "4. Now we can train the model:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfairseq-train \\\n --save-dir \\\n --task \"translation\" \\\n --source-lang \"src\" \\\n --target-lang \"tgt\" \\\n --arch \"transformer_pointer_generator\" \\\n --max-source-positions 512 \\\n --max-target-positions 128 \\\n --truncate-source \\\n --max-tokens 2048 \\\n --required-batch-size-multiple 1 \\\n --required-seq-len-multiple 8 \\", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "--share-all-embeddings \\\n --dropout 0.1 \\\n --criterion \"cross_entropy\" \\\n --optimizer adam \\\n --adam-betas '(0.9, 0.98)' \\\n --adam-eps 1e-9 \\\n --update-freq 4 \\\n --lr 0.004 \\\n # Pointer Generator\n --alignment-layer -1 \\\n --alignment-heads 1 \\\n --source-position-markers 512", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "This configuration makes use of features Nuance has contributed back to Fairseq:\n\n* Transformer with a Pointer Generator mechanism to facilitate copying of words from the input.\n* Sequence length padded to a multiple of 8 to better use tensor cores and reduce training time.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "5. Now let's take a look at how to generate a summary with our new medical report generation system:\n\n```python\nimport torch\nfrom examples.pointer_generator.pointer_generator_src.transformer_pg import TransformerPointerGeneratorModel\n\n# Patient-Doctor conversation\ninput = \"[doctor] Lisa Simpson, thirty six year old female, presents to the clinic today because \" \\\n \"she has severe right wrist pain\"", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "# Load the model\nmodel = TransformerPointerGeneratorModel.from_pretrained(data_name_or_path=,\n model_name_or_path=,\n checkpoint_file=\"checkpoint_best.pt\")\n\nresult = model.translate([input], beam=2)\n\nprint(result[0])\nMs. is a 36-year-old female who presents to the clinic today for evaluation of her right wrist.\n```", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "6. Alternatively, we can use fairseq-interactive and a postprocessing tool to substitute positional unknown tokens by its words from the input:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfairseq-interactive \\\n --batch-size \\\n --task translation \\\n --source-lang src \\\n --target-lang tgt \\\n --path /checkpoint_last.pt \\\n --input /path/to/test.pg.src \\\n --buffer-size 20 \\\n --max-len-a 0 \\\n --max-len-b 128 \\\n --beam 2 \\\n --skip-invalid-size-inputs-valid-test | tee generate.out", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "grep \"^H-\" generate.out | cut -f 3- > generate.hyp\n\n./postprocess.py \\\n\t--source <(awk 'NF<512' /path/to/test.pg.src) \\\n\t--target generate.hyp \\\n\t--target-out generate.hyp.processed", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Now we have the final set of reports in \"generate.hyp.processed\", with \"\\\" replaced by the original word from the input sequence.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Model Deployment\n\nPyTorch offers great flexibility in modeling and a rich surrounding ecosystem. However, while several recent articles have suggested that the use of PyTorch in research and academia may be close to surpassing TensorFlow, there seems to be an overall sense of TensorFlow being the preferred platform for deployment to production. Is this still the case in 2021? Teams looking to serve their PyTorch models in production have a few options.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Before describing our journey, let's take a brief detour and define the term model.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Models as computation graphs", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "A few years back, it was still common for machine learning toolkits to support only particular classes of models of a rather fixed and rigid structure, with only a few degrees of freedom (like the kernel of a support vector machine or the number of hidden layers of a neural network). Inspired by foundational work in Theano, toolkits like Microsoft's CNTK or Google's TensorFlow were among the first to popularize a more flexible view on models, as computation graphs with associated parameters that can be", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "estimated from data. This view blurred the boundaries between popular types of models (such as DNNs or SVMs), as it became easy to blend the characteristics of each into your type of graph. Still, such a graph had to be defined upfront before estimating its parameters, and it was pretty static. This made it easy to save models to a self-contained bundle, like a TensorFlow SavedModel (such a bundle simply contains the structure of the graph, as well as the concrete values of the estimated parameters).", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "However, debugging such models can be difficult because the statements in the Python code that build the graph are logically separate from the lines that execute it. Researchers also long for easier ways of expressing dynamic behavior, such as the computation steps of the forward pass of a model being conditionally dependent on its input data (or its previous output).", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Most recently, the above limitations have led to a second revolution spearheaded by PyTorch and TensorFlow 2. The computation graph is no longer defined explicitly. Instead, it will be populated implicitly as the Python code executes operations on tensor arguments. An essential technique that powers this development is automatic differentiation. As the computation graph is being built implicitly while executing the steps of the forward pass, all the necessary data will be tracked for later computation of", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "the gradient concerning the model parameters. This allows for great flexibility in training a model, but it raises an important question. If the computation happening inside a model is only implicitly defined through our Python code's steps as it executes concrete data, what is it that we want to save as a model? The answer \u2013 at least initially \u2013 was the Python code with all its dependencies, along with the estimated parameters. This is undesirable for practical reasons. For instance, there is a danger that", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "the team working on model deployment does not exactly reproduce the Python code dependencies used during training, leading to subtly divergent behavior. The solution typically consists of combining two techniques, scripting and tracing, that is, extra annotations in your Python code and execution of your code on exemplary input data, allowing PyTorch to define and save the graph that should be executed during later inference on new, unseen data. This requires some discipline by whoever creates the model", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "code (arguably voiding some of the original flexibility of eager execution), but it results in a self-contained model bundle in TorchScript format. The solution in TensorFlow 2 is remarkably similar.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Serving our report generation models", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Our journey in deploying the report generation models reflects the above discussion. We started out serving our models by deploying the model code and its dependencies along with the parameter checkpoints in a custom Docker image exposing a gRPC service interface. However, we soon noticed that it became error-prone to replicate the exact code and environment used by the modeling team while estimating the parameters. Moreover, this approach prevented us from leveraging high-performance model serving", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "frameworks like NVIDIA's Triton, which is written in C++ and requires self-contained models that can be used without a Python interpreter. At this stage, we were facing a choice between attempting to export our PyTorch models to ONNX or TorchScript format. ONNX is an open specification for representing machine learning models that increasingly finds adoption. It is powered by a high-performance runtime developed by Microsoft (ONNX Runtime). While we were able to achieve performance acceleration for our", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "TensorFlow BERT-based model using ONNX Runtime, at the time one of our PyTorch model required some operators that weren\u2019t yet supported in ONNX. Rather than implement these using custom operators, we decided to look into TorchScript for the time being.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "A maturing ecosystem", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Is it all roses? No, it has been a rockier journey than we expected. We encountered what seems to be a memory leak in the MKL libraries used by PyTorch while serving the PyTorch code directly. We encountered deadlocks in trying to load multiple models from multiple threads. We had difficulties exporting our models to ONNX and TorchScript formats. Models would not work out-of-the-box on hardware with multiple GPUs, they always accessed the particular GPU device on which they were exported. We encountered", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "excessive memory usage in the Triton inference server while serving TorchScript models, which we found out was due to automatic differentiation accidentally being enabled during the forward pass. However, the ecosystem keeps improving, and there is a helpful and vibrant open-source community eager to work with us to mitigate such issues.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Where to go from here? For those that require the flexibility of serving PyTorch code directly, without going through the extra step of exporting self-contained models, it is worth pointing out that the TorchServe project now provides a way of bundling the code together with parameter checkpoints into a single servable archive, greatly reducing the risk of code and parameters running apart. To us, however, exporting models to TorchScript has proven beneficial. It provides a clear interface between modeling", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "and deployment teams, and TorchScript further reduces the latency when serving models on GPU via its just-in-time compilation engine.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Scaling at large and the future", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Finally, efficient deployment to the cloud is about more than just computing the response of a single model instance efficiently. Flexibility is needed in managing, versioning and updating models. High-level scalability must be achieved via techniques such as load-balancing, horizontal scaling and vertical scaling. If many models are involved, scale-to-zero quickly becomes a topic as it is unacceptable to pay for serving models that do not answer any requests. Providing such extra functionality on top of a", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "low-level inference server like Triton is the job of an orchestration framework. After gaining some first experience with KubeFlow, to that end, we decided to turn our attention to Azure ML, which provides similar functionality but integrates more deeply with the Azure platform, on which we crucially rely for large parts of our technology stack already. This part of our journey has just begun.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Conclusion", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Academia has long recognized that we are \"standing on the shoulders of giants.\" As Artificial Intelligence is maturing from a scientific discipline into technology, the same spirit of collaboration that originally fueled its scientific foundation has carried over into the world of software engineering. Open-source enthusiasts join technology companies worldwide to build open software ecosystems that allow for new angles at solving some of the most pressing challenges of modern society. In this article,", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "we've taken a look at Nuance's [Dragon Ambient eXperience](http://www.nuance.com/ambient), an AI-powered, voice-enabled solution that automatically documents patient care, reducing healthcare providers' administrative burdens. Nuance DAX improves the patient-provider experience, reduces physician burnout, and improves financial outcomes. It brings back trust, joy, and humanity to the delivery of healthcare. Fairseq and PyTorch have proven to be an incredible platform for powering this AI technology, and in", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "turn, Nuance has contributed back some of its innovations in this space. For further reading, we invite you to take a look at our recent [ACL publication](https://www.aclweb.org/anthology/2020.nlpmc-1.4/) and the [Nuance \"What's Next\" blog](https://whatsnext.nuance.com/rd/using-deep-learning-to-generate-medical-reports/).", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Microsoft becomes maintainer of the Windows version of PyTorch'\nauthor: Maxim Lukiyanov - Principal PM at Microsoft, Emad Barsoum - Group EM at Microsoft, Guoliang Hua - Principal EM at Microsoft, Nikita Shulga - Tech Lead at Facebook, Geeta Chauhan - PE Lead at Facebook, Chris Gottbrath - Technical PM at Facebook, Jiachen Pu - Engineer at Facebook", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Along with the PyTorch 1.6 release, we are excited to announce that Microsoft has expanded its participation in the PyTorch community and will be responsible for the development and maintenance of the PyTorch build for Windows.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "According to the latest [Stack Overflow developer survey](https://insights.stackoverflow.com/survey/2020#technology-developers-primary-operating-systems), Windows remains the primary operating system for the developer community (46% Windows vs 28% MacOS). [Jiachen Pu](https://github.com/peterjc123) initially made a heroic effort to add support for PyTorch on Windows, but due to limited resources, Windows support for PyTorch has lagged behind other platforms. Lack of test coverage resulted in unexpected", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "issues popping up every now and then. Some of the core tutorials, meant for new users to learn and adopt PyTorch, would fail to run. The installation experience was also not as smooth, with the lack of official PyPI support for PyTorch on Windows. Lastly, some of the PyTorch functionality was simply not available on the Windows platform, such as the TorchAudio domain library and distributed training support. To help alleviate this pain, Microsoft is happy to bring its Windows expertise to the table and", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "bring PyTorch on Windows to its best possible self.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In the PyTorch 1.6 release, we have improved the core quality of the Windows build by bringing test coverage up to par with Linux for core PyTorch and its domain libraries and by automating tutorial testing. Thanks to the broader PyTorch community, which contributed TorchAudio support to Windows, we were able to add test coverage to all three domain libraries: TorchVision, TorchText and TorchAudio. In subsequent releases of PyTorch, we will continue improving the Windows experience based on community", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "feedback and requests. So far, the feedback we received from the community points to distributed training support and a better installation experience using pip as the next areas of improvement.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In addition to the native Windows experience, Microsoft released a preview adding [GPU compute support to Windows Subsystem for Linux (WSL) 2](https://blogs.windows.com/windowsdeveloper/2020/06/17/gpu-accelerated-ml-training-inside-the-windows-subsystem-for-linux/) distros, with a focus on enabling AI and ML developer workflows. WSL is designed for developers that want to run any Linux based tools directly on Windows. This preview enables valuable scenarios for a variety of frameworks and Python packages", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "that utilize [NVIDIA CUDA](https://developer.nvidia.com/cuda/wsl) for acceleration and only support Linux. This means WSL customers using the preview can run native Linux based PyTorch applications on Windows unmodified without the need for a traditional virtual machine or a dual boot setup.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Getting started with PyTorch on Windows\nIt's easy to get started with PyTorch on Windows. To install PyTorch using Anaconda with the latest GPU support, run the command below. To install different supported configurations of PyTorch, refer to the installation instructions on [pytorch.org](https://pytorch.org).\n\n`conda install pytorch torchvision cudatoolkit=10.2 -c pytorch`", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Once you install PyTorch, learn more by visiting the [PyTorch Tutorials](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) and [documentation](https://pytorch.org/docs/stable/index.html).\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Getting started with PyTorch on Windows Subsystem for Linux\nThe [preview of NVIDIA CUDA support in WSL](https://docs.microsoft.com/en-us/windows/win32/direct3d12/gpu-cuda-in-wsl) is now available to Windows Insiders running Build 20150 or higher. In WSL, the command to install PyTorch using Anaconda is the same as the above command for native Windows. If you prefer pip, use the command below.\n\n`pip install torch torchvision`", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "You can use the same tutorials and documentation inside your WSL environment as on native Windows. This functionality is still in preview so if you run into issues with WSL please share feedback via the [WSL GitHub repo](https://github.com/microsoft/WSL) or with NVIDIA CUDA support share via NVIDIA\u2019s [Community Forum for CUDA on WSL](https://forums.developer.nvidia.com/c/accelerated-computing/cuda/cuda-on-windows-subsystem-for-linux/303).", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Feedback\nIf you find gaps in the PyTorch experience on Windows, please let us know on the [PyTorch discussion forum](https://discuss.pytorch.org/c/windows/26) or file an issue on [GitHub](https://github.com/pytorch/pytorch) using the #module: windows label.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Accelerated PyTorch 2 Transformers\"\nauthor: Michael Gschwind, Driss Guessous, Christian Puhrsch\n---", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "The PyTorch 2.0 release includes a new high-performance implementation of the PyTorch Transformer API with the goal of making training and deployment of state-of-the-art Transformer models affordable. Following the successful release of \u201cfastpath\u201d inference execution (\u201cBetter Transformer\u201d), this release introduces high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SPDA).", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "You can take advantage of the new fused SDPA kernels either by calling the new SDPA operator directly (as described in the [SDPA tutorial](https://pytorch.org/tutorials/intermediate/scaled_dot_product_attention_tutorial.html#beta-implementing-high-performance-transformers-with-scaled-dot-product-attention-sdpa)), or transparently via integration into the pre-existing PyTorch Transformer API. All features of the PyTorch Transformer API will continue to work compatibly, with many features mapped to", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "high-performance SDPA kernels, while other features are impossible to support with higher performance (e.g., need_weights, as per below) while expanded high-performance support for other features may still be under active development. \\", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "\\", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "Similar to the \u201cfastpath\u201d architecture, custom kernels are fully integrated into the PyTorch Transformer API \u2013 thus, using the native Transformer and MultiHeadAttention API will enable users to transparently see significant speed improvements. Unlike the \u201cfastpath\u201d architecture, the newly introduced \u201ccustom kernels\u201d support many more use cases including models using Cross-Attention, Transformer Decoders, and for training models, in addition to the existing fastpath inference for fixed and variable sequence", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "length Transformer Encoder and Self Attention use cases.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "To take full advantage of different hardware models and Transformer use cases, multiple SDPA custom kernels are supported, with custom kernel selection logic that will pick the highest-performance kernel for a given model and hardware type. In particular, the first custom kernels included with the PyTorch 2.0 release are the [Flash Attention](https://arxiv.org/abs/2205.14135) kernel (sdpa_flash, for 16-bit floating point training and inference on Nvidia GPUs with SM80+ architecture level) and the [xFormers", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "memory-efficient attention](https://github.com/facebookresearch/xformers) kernel (sdpa_mem_eff, for 16-bit and 32-bit floating point training and inference on a broad range of Nvidia GPUs). A general-purpose kernel sdpa_math provides an implementation when the custom kernels are not applicable.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "As mentioned, custom kernels provide a wider range of support for execution scenarios To ensure efficient execution (e,g., to use GPU tensor cores), model configurations need to meet a small number of requirements. This list of requirements will evolve over time, prospectively relaxing constraints limiting the usage of currently supported custom kernels, or providing additional kernels in the future.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "For the most up to date list of custom kernels and dispatch constraints, you can refer to [sdp_utils.h](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/transformers/cuda/sdp_utils.h). As of PyTorch 2.0, the existing fused SDPA kernels have the following constraints:", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "* Flash Attention only supports 16 bit floating point data types (float16 and bfloat16).\n* The head dimension must be a multiple of 8 for 16-bit floating point numbers and a multiple of 4 for 32-bit floating point numbers. At present, the maximum head_dim support for the Flash Attention custom kernel is 128.\n* The CUDA architecture level must be sm5x or better for the mem_efficient kernel, and sm80 for Flash Attention.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "* Flash Attention supports arbitrary dropout, in PyTorch 2.0 the mem_efficient kernel does not support dropout (i.e., dropout must be set to zero for this kernel to be selected in PyTorch 2.0).", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "* To support variable-sequence length batches, all SDPA kernels support Nested Tensor inputs that combine input data and padding information using variable sequence length tensors for forward. (You can find more information about Nested Tensors in the [Nested Tensor tutorial](https://pytorch.org/tutorials/prototype/nestedtensor.html).)", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "* You can specify both a _key_padding_mask_ and an _attn_mask_ by combining them before passing them to the SDPA operator. In particular, you can use the per-batch-element key padding mask of the nn.Transformer API to implement training for variable-sequence length inputs in a batch.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "* At present, the only attention mask supported by fused kernel implementation is the causal mask commonly used for training. To specify the causal mask in custom kernels, it must be specified with the _is_causal_ boolean and _attn_mask_ must be None. \n* Support for Nested Tensors is still under development. Specifically, in PyTorch 2.0, only the sdpa_math kernel supports training with Nested Tensors. Also, PyTorch 2.0 does not support Nested Tensors as part of code being compiled with torch.compile().", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "* The SDPA operator does not support returning averaged attention weights because computing them defeats the optimizations that enabled fused kernels to execute more efficiently. The argument _need_weights_ for torch.nn.MultiheadAttention's forward function defaults to True. In order to use the fused kernels, _need_weights_ needs to be set to _need_weights=False_.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "We find that an attention mask is rarely used in real-world applications, except for the causal mask during training. Consequently, we reduce kernel complexity and compute cost by building in the option to use a causal mask as attention mask, and select this new capability with the _is_causal_ parameter introduced in conjunction with the new SDPA operator.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "Providing the _is_causal_ Boolean flag for the frequently used causal mask also obviates the expensive and memory-intensive allocation of a causal mask, increasing training memory efficiency by allowing more memory to be used for large batch sizes, and reduce memory bandwidth and cache contention \u2013 which are both at a premium in GPU accelerators \u2013 by not needing to load an attention mask tensor.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "If the constraints of none of the available custom kernels are met, then training falls back to using the default sdpa_math kernel, implementing the mathematical equations for scaled dot product attention using a sequence of PyTorch operator to implement SDPA. This is the most general \u201ccatch-all\u201d fallback kernel to ensure successful training for all models.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "In addition to the existing Transformer API, model developers may also use the scaled dot product attention kernels directly by calling the new `scaled_dot_product_attention()` operator. This operator may be used to efficiently implement multi-head attention by combining it with in-projection and outprojection, as described in the [SDPA tutorial](https://pytorch.org/tutorials/intermediate/scaled_dot_product_attention_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "In addition to adding custom kernels, Accelerated PyTorch 2 Transformers are integrated with PyTorch 2.0 compilation. To use your model while benefiting from the additional acceleration of PT2-compilation (for inference or training), pre-process the model with\n\n\n```\nmodel = torch.compile(model)", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "We have achieved major speedups for training transformer models and in particular large language models with Accelerated PyTorch 2 Transformers using a combination of custom kernels and torch.compile().", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "{:width=\"100%\"}\nFigure: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for [nanoGPT](https://github.com/karpathy/nanoGPT) shown here.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "Finally, because the custom kernels are much more memory efficient, try to increase the size of training batches to achieve faster training with increased batch size.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "In addition to automatic kernel selection, a context manager enables developers to override the kernel selection algorithm \u2013 this is not required for day to day operation, but enables developers to debug their code as well as enable performance engineers to override kernel selection. The SDPA tutorial provides additional information on using the SDPA context manager.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "In addition to availability as part of the nn.Transformer API, Accelerated PyTorch 2 Transformer custom kernels are also available in conjunction with the torchtext, torchvision, and fairseq domain libraries with the launch of PyTorch 2.0.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Mapillary Research: Seamless Scene Segmentation and In-Place Activated BatchNorm'\nauthor: Lorenzo Porzi, Mapillary\nredirect_from: /2019/07/23/mapillary-research.html\n---", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "With roads in developed countries like the US changing up to 15% annually, Mapillary addresses a growing demand for keeping maps updated by combining images from any camera into a 3D visualization of the world. Mapillary's independent and collaborative approach enables anyone to collect, share, and use street-level images for improving maps, developing cities, and advancing the automotive industry.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Today, people and organizations all over the world have contributed more than 600 million images toward Mapillary's mission of helping people understand the world's places through images and making this data available, with clients and partners including the World Bank, HERE, and Toyota Research Institute.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Mapillary\u2019s computer vision technology brings intelligence to maps in an unprecedented way, increasing our overall understanding of the world. [Mapillary](https://www.mapillary.com/) runs state-of-the-art semantic image analysis and image-based 3d modeling at scale and on all its images. In this post we discuss two recent works from Mapillary Research and their implementations in PyTorch - Seamless Scene Segmentation [1] and In-Place Activated BatchNorm [2] - generating Panoptic segmentation results and", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "saving up to 50% of GPU memory during training, respectively.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Seamless Scene Segmentation\n\n_Github project page: [https://github.com/mapillary/seamseg/](https://github.com/mapillary/seamseg/)_\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "The objective of Seamless Scene Segmentation is to predict a \u201cpanoptic\u201d segmentation [3] from an image, that is a complete labeling where each pixel is assigned with a class id and, where possible, an instance id. Like many modern CNNs dealing with instance detection and segmentation, we adopt the Mask R-CNN framework [4], using ResNet50 + FPN [5] as a backbone. This architecture works in two stages: first, the \u201cProposal Head\u201d selects a set of candidate bounding boxes on the image (i.e. the proposals) that", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "could contain an object; then, the \u201cMask Head\u201d focuses on each proposal, predicting its class and segmentation mask. The output of this process is a \u201csparse\u201d instance segmentation, covering only the parts of the image that contain countable objects (e.g. cars and pedestrians).", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "To complete our panoptic approach coined Seamless Scene Segmentation, we add a third stage to Mask R-CNN. Stemming from the same backbone, the \u201cSemantic Head\u201d predicts a dense semantic segmentation over the whole image, also accounting for the uncountable or amorphous classes (e.g. road and sky). The outputs of the Mask and Semantic heads are finally fused using a simple non-maximum suppression algorithm to generate the final panoptic prediction. All details about the actual network architecture, used", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "losses and underlying math can be found at the [project website](https://research.mapillary.com/publication/cvpr19a) for our CVPR 2019 paper [1].", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "While several versions of Mask R-CNN are publicly available, including an [official implementation](https://github.com/facebookresearch/Detectron) written in Caffe2, at Mapillary we decided to build Seamless Scene Segmentation from scratch using PyTorch, in order to have full control and understanding of the whole pipeline. While doing so we encountered a couple of main stumbling blocks, and had to come up with some creative workarounds we are going to describe next.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Dealing with variable-sized tensors", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Something that sets aside panoptic segmentation networks from traditional CNNs is the prevalence of variable-sized data. In fact, many of the quantities we are dealing with cannot be easily represented with fixed sized tensors: each image contains a different number of objects, the Proposal head can produce a different number of proposals for each image, and the images themselves can have different sizes. While this is not a problem per-se -- one could just process images one at a time -- we would still", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "like to exploit batch-level parallelism as much as possible. Furthermore, when performing distributed training with multiple GPUs, `DistributedDataParallel` expects its inputs to be batched, uniformly-sized tensors.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Our solution to these issues is to wrap each batch of variable-sized tensors in a `PackedSequence`. `PackedSequence` is little more than a glorified list class for tensors, tagging its contents as \u201crelated\u201d, ensuring that they all share the same type, and providing useful methods like moving all the tensors to a particular device, etc. When performing light-weight operations that wouldn\u2019t be much faster with batch-level parallelism, we simply iterate over the contents of the `PackedSequence` in a for loop.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "When performance is crucial, e.g. in the body of the network, we simply concatenate the contents of the PackedSequence, adding zero padding as required (like in RNNs with variable-length inputs), and keeping track of the original dimensions of each tensor.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "`PackedSequence`s also help us deal with the second problem highlighted above. We slightly modify `DistributedDataParallel` to recognize `PackedSequence` inputs, splitting them in equally sized chunks and distributing their contents across the GPUs.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Asymmetric computational graphs with Distributed Data Parallel", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Another, perhaps more subtle, peculiarity of our network is that it can generate asymmetric computational graphs across GPUs. In fact, some of the modules that compose the network are \u201coptional\u201d, in the sense that they are not always computed for all images. As an example, when the Proposal head doesn\u2019t output any proposal, the Mask head is not traversed at all. If we are training on multiple GPUs with `DistributedDataParallel`, this results in one of the replicas not computing gradients for the Mask head", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "parameters.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Prior to PyTorch 1.1, this resulted in a crash, so we had to develop a workaround. Our simple but effective solution was to compute a \u201cfake forward pass\u201d when no actual forward is required, i.e. something like this:\n\n```python\ndef fake_forward():\n fake_input = get_correctly_shaped_fake_input()\n fake_output = mask_head(fake_input)\n fake_loss = fake_output.sum() * 0\n return fake_loss", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Here, we generate a batch of bogus data, pass it through the Mask head, and return a loss that always back-progates zeros to all parameters.\n\nStarting from PyTorch 1.1 this workaround is no longer required: by setting `find_unused_parameters=True` in the constructor, `DistributedDataParallel` is told to identify parameters whose gradients have not been computed by all replicas and correctly handle them. This leads to some substantial simplifications in our code base!", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "In-place Activated BatchNorm\n\n_Github project page: [https://github.com/mapillary/inplace_abn/](https://github.com/mapillary/inplace_abn/)_", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Most researchers would probably agree that there are always constraints in terms of available GPU resources, regardless if their research lab has access to only a few or multiple thousands of GPUs. In a time where at Mapillary we still worked at rather few and mostly 12GB Titan X - style prosumer GPUs, we were searching for a solution that virtually enhances the usable memory during training, so we would be able to obtain and push state-of-the-art results on dense labeling tasks like semantic segmentation.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "In-place activated BatchNorm is enabling us to use up to 50% more memory (at little computational overhead) and is therefore deeply integrated in all our current projects (including Seamless Scene Segmentation described above).", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "When processing a BN-Activation-Convolution sequence in the forward pass, most deep learning frameworks (including PyTorch) need to store two big buffers, i.e. the input x of BN and the input z of Conv. This is necessary because the standard implementations of the backward passes of BN and Conv depend on their inputs to calculate the gradients. Using InPlace-ABN to replace the BN-Activation sequence, we can safely discard x, thus saving up to 50% GPU memory at training time. To achieve this, we rewrite the", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "backward pass of BN in terms of its output y, which is in turn reconstructed from z by inverting the activation function.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "The only limitation of InPlace-ABN is that it requires using an invertible activation function, such as leaky relu or elu. Except for this, it can be used as a direct, drop-in replacement for BN+activation modules in any network. Our native CUDA implementation offers minimal computational overhead compared to PyTorch\u2019s standard BN, and is available for anyone to use from here: [https://github.com/mapillary/inplace_abn/](https://github.com/mapillary/inplace_abn/).", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Synchronized BN with asymmetric graphs and unbalanced batches", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "When training networks with synchronized SGD over multiple GPUs and/or multiple nodes, it\u2019s common practice to compute BatchNorm statistics separately on each device. However, in our experience working with semantic and panoptic segmentation networks, we found that accumulating mean and variance across all workers can bring a substantial boost in accuracy. This is particularly true when dealing with small batches, like in Seamless Scene Segmentation where we train with a single, super-high resolution image", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "per GPU.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "InPlace-ABN supports synchronized operation over multiple GPUs and multiple nodes, and, since version 1.1, this can also be achieved in the standard PyTorch library using [SyncBatchNorm](https://pytorch.org/docs/stable/nn.html#syncbatchnorm). Compared to SyncBatchNorm, however, we support some additional functionality which is particularly important for Seamless Scene Segmentation: unbalanced batches and asymmetric graphs.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "As mentioned before, Mask R-CNN-like networks naturally give rise to variable-sized tensors. Thus, in InPlace-ABN we calculate synchronized statistics using a variant of the parallel algorithm described [here](https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Parallel_algorithm), which properly takes into account the fact that each GPU can hold a different number of samples. PyTorch\u2019s SyncBatchNorm is currently being revised to support this, and the improved functionality will be available", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "in a future release.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "Asymmetric graphs (in the sense mentioned above) are another complicating factor one has to deal with when creating a synchronized BatchNorm implementation. Luckily, PyTorch\u2019s distributed group functionality allows us to restrict distributed communication to a subset of workers, easily excluding those that are currently inactive. The only missing piece is that, in order to create a distributed group, each process needs to know the ids of all processes that will participate in the group, and even processes", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "that are not part of the group need to call the `new_group()` function. In InPlace-ABN we handle it with a function like this:", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "```python\nimport torch\nimport torch.distributed as distributed\n\ndef active_group(active):\n \"\"\"Initialize a distributed group where each process can independently decide whether to participate or not\"\"\"\n world_size = distributed.get_world_size()\n rank = distributed.get_rank()", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "# Gather active status from all workers\n active = torch.tensor(rank if active else -1, dtype=torch.long, device=torch.cuda.current_device())\n active_workers = torch.empty(world_size, dtype=torch.long, device=torch.cuda.current_device())\n distributed.all_gather(list(active_workers.unbind(0)), active)\n\n # Create group\n active_workers = [int(i) for i in active_workers.tolist() if i != -1]\n group = distributed.new_group(active_workers)\n return group", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "First each process, including inactive ones, communicates its status to all others through an `all_gather` call, then it creates the distributed group with the shared information. In the actual implementation we also include a caching mechanism for groups, since `new_group()` is usually too expensive to call at each batch.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "References\n\n[1] Seamless Scene Segmentation; Lorenzo Porzi, Samuel Rota Bul\u00f2, Aleksander Colovic, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2019\n\n[2] In-place Activated BatchNorm for Memory-Optimized Training of DNNs; Samuel Rota Bul\u00f2, Lorenzo Porzi, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2018\n\n[3] Panoptic Segmentation; Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, Piotr Dollar; Computer Vision and Pattern Recognition (CVPR), 2019", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
-{"page_content": "[4] Mask R-CNN; Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick; International Conference on Computer Vision (ICCV), 2017\n\n[5] Feature Pyramid Networks for Object Detection; Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie; Computer Vision and Pattern Recognition (CVPR), 2017", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "In our continuous effort to contribute to the open-source community, features have been developed at Nuance and pushed to the Fairseq GitHub repository. These try to overcome some of the challenges mentioned such as, facilitating copying of, especially rare or unseen, words from the input to summary, training speedups by improving Tensor Core utilization, and ensuring TorchScript compatibility of different Transformer configurations. Following, we will show an example of how to train a Transformer model with a Pointer Generator mechanism (Transformer-PG), which can copy words from the input.\n\n## How to build a Transformer model with a Pointer Generator mechanism\n\nIn this step-by-step guide, it is assumed the user has already installed PyTorch and Fairseq.\n\n### 1. Create a vocabulary and extend it with source position markers:\n\nThese markers will allow the model to point to any word in the input sequence.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```python\nvocab_size=\nposition_markers=512\nexport LC_ALL=C\ncat train.src train.tgt |\n tr -s '[:space:]' '\\n' |\n sort |\n uniq -c |\n sort -k1,1bnr -k2 |\n head -n \"$((vocab_size - 4))\" |\n awk '{ print $2 \" \" $1 }' > dict.pg.txt\npython3 -c \"[print(' 0'.format(n)) for n in range($position_markers)]\" >> dict.pg.txt\n```\n\nThis will create a file \"dict.pg.txt\" that contains the \\ most frequent words followed by 512 position markers named from \"\\\" to \"\\\".\n\nIn case we have an input like\n\n```python\nsrc = \"Hello, I'm The Dogtor\"\n```\n\nit could happen that our model has been trained without the word \"Dogtor\" in its vocabulary. Therefore, when we feed this sequence into the model, it should be converted to:\n\n```python\nsrc = \"Hello, I'm The \"\n```", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Now, \"\\\" is part of our vocabulary and could be predicted by the model (this is where the pointer-generator comes in). In such a case, we will only need to post-process the output to replace \"\\\" by the word at input position 3.\n\n### 2. Preprocess the text data to replace unknown words by its positional markers:\n\nWe can use the scripts from [https://github.com/pytorch/fairseq/tree/master/examples/pointer_generator](https://github.com/pytorch/fairseq/tree/master/examples/pointer_generator).\n\n```python\n# Considering we have our data in:\n# train_src = /path/to/train.src\n# train_tgt = /path/to/train.tgt\n# valid_src = /path/to/valid.src\n# valid_tgt = /path/to/valid.tgt\n./preprocess.py --source /path/to/train.src \\\n --target /path/to/train.tgt \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/train.pg.src \\\n --target-out /path/to/train.pg.tgt", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "./preprocess.py --source /path/to/valid.src \\\n --target /path/to/valid.tgt \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/valid.pg.src \\\n --target-out /path/to/valid.pg.tgt\n\n./preprocess.py --source /path/to/test.src \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/test.pg.src\n```\n\n### 3. Now let's binarize the data, so that it can be processed faster:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```python\nfairseq-preprocess --task \"translation\" \\\n --source-lang \"pg.src\" \\\n --target-lang \"pg.tgt\" \\\n --trainpref /path/to/train \\\n --validpref /path/to/valid \\\n --srcdict dict.pg.txt \\\n --cpu \\\n --joined-dictionary \\\n --destdir \n```\t\t \n\t\t\t\t \nYou might notice the type of task is \"translation\". This is because there is no \"summarization\" task available; we could understand it as a kind of NMT task where the input and output languages are shared and the output (summary) is shorter than the input.\n\n### 4. Now we can train the model:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```python\nfairseq-train \\\n --save-dir \\\n --task \"translation\" \\\n --source-lang \"src\" \\\n --target-lang \"tgt\" \\\n --arch \"transformer_pointer_generator\" \\\n --max-source-positions 512 \\\n --max-target-positions 128 \\\n --truncate-source \\\n --max-tokens 2048 \\\n --required-batch-size-multiple 1 \\\n --required-seq-len-multiple 8 \\\n --share-all-embeddings \\\n --dropout 0.1 \\\n --criterion \"cross_entropy\" \\\n --optimizer adam \\\n --adam-betas '(0.9, 0.98)' \\\n --adam-eps 1e-9 \\\n --update-freq 4 \\\n --lr 0.004 \\\n # Pointer Generator\n --alignment-layer -1 \\\n --alignment-heads 1 \\\n --source-position-markers 512\n```\n\nThis configuration makes use of features Nuance has contributed back to Fairseq:", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "* Transformer with a Pointer Generator mechanism to facilitate copying of words from the input.\n* Sequence length padded to a multiple of 8 to better use tensor cores and reduce training time.\n\n### 5. Now let's take a look at how to generate a summary with our new medical report generation system:\n\n```python\nimport torch\nfrom examples.pointer_generator.pointer_generator_src.transformer_pg import TransformerPointerGeneratorModel\n\n# Patient-Doctor conversation\ninput = \"[doctor] Lisa Simpson, thirty six year old female, presents to the clinic today because \" \\\n \"she has severe right wrist pain\"\n\n# Load the model\nmodel = TransformerPointerGeneratorModel.from_pretrained(data_name_or_path=,\n model_name_or_path=,\n checkpoint_file=\"checkpoint_best.pt\")\n\nresult = model.translate([input], beam=2)", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "print(result[0])\nMs. is a 36-year-old female who presents to the clinic today for evaluation of her right wrist.\n```\n\n### 6. Alternatively, we can use fairseq-interactive and a postprocessing tool to substitute positional unknown tokens by its words from the input:\n\n```python\nfairseq-interactive \\\n --batch-size \\\n --task translation \\\n --source-lang src \\\n --target-lang tgt \\\n --path /checkpoint_last.pt \\\n --input /path/to/test.pg.src \\\n --buffer-size 20 \\\n --max-len-a 0 \\\n --max-len-b 128 \\\n --beam 2 \\\n --skip-invalid-size-inputs-valid-test | tee generate.out\n\ngrep \"^H-\" generate.out | cut -f 3- > generate.hyp\n\n./postprocess.py \\\n\t--source <(awk 'NF<512' /path/to/test.pg.src) \\\n\t--target generate.hyp \\\n\t--target-out generate.hyp.processed\n```", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Now we have the final set of reports in \"generate.hyp.processed\", with \"\\\" replaced by the original word from the input sequence.\n\n## Model Deployment\n\nPyTorch offers great flexibility in modeling and a rich surrounding ecosystem. However, while several recent articles have suggested that the use of PyTorch in research and academia may be close to surpassing TensorFlow, there seems to be an overall sense of TensorFlow being the preferred platform for deployment to production. Is this still the case in 2021? Teams looking to serve their PyTorch models in production have a few options.\n\nBefore describing our journey, let's take a brief detour and define the term model.\n\n### Models as computation graphs", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "A few years back, it was still common for machine learning toolkits to support only particular classes of models of a rather fixed and rigid structure, with only a few degrees of freedom (like the kernel of a support vector machine or the number of hidden layers of a neural network). Inspired by foundational work in Theano, toolkits like Microsoft's CNTK or Google's TensorFlow were among the first to popularize a more flexible view on models, as computation graphs with associated parameters that can be estimated from data. This view blurred the boundaries between popular types of models (such as DNNs or SVMs), as it became easy to blend the characteristics of each into your type of graph. Still, such a graph had to be defined upfront before estimating its parameters, and it was pretty static. This made it easy to save models to a self-contained bundle, like a TensorFlow SavedModel (such a bundle simply contains the structure of the graph, as well as the concrete values of the estimated parameters). However, debugging such models can be difficult because the statements in the Python code that build the graph are logically separate from the lines that execute it. Researchers also long for easier ways of expressing dynamic behavior, such as the computation steps of the forward pass of a model being conditionally dependent on its input data (or its previous output).", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Most recently, the above limitations have led to a second revolution spearheaded by PyTorch and TensorFlow 2. The computation graph is no longer defined explicitly. Instead, it will be populated implicitly as the Python code executes operations on tensor arguments. An essential technique that powers this development is automatic differentiation. As the computation graph is being built implicitly while executing the steps of the forward pass, all the necessary data will be tracked for later computation of the gradient concerning the model parameters. This allows for great flexibility in training a model, but it raises an important question. If the computation happening inside a model is only implicitly defined through our Python code's steps as it executes concrete data, what is it that we want to save as a model? The answer \u2013 at least initially \u2013 was the Python code with all its dependencies, along with the estimated parameters. This is undesirable for practical reasons. For instance, there is a danger that the team working on model deployment does not exactly reproduce the Python code dependencies used during training, leading to subtly divergent behavior. The solution typically consists of combining two techniques, scripting and tracing, that is, extra annotations in your Python code and execution of your code on exemplary input data, allowing PyTorch to define and save the graph that should be executed during later inference on new, unseen data. This requires some discipline by whoever creates the model code (arguably voiding some of the original flexibility of eager execution), but it results in a self-contained model bundle in TorchScript format. The solution in TensorFlow 2 is remarkably similar.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### Serving our report generation models", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Our journey in deploying the report generation models reflects the above discussion. We started out serving our models by deploying the model code and its dependencies along with the parameter checkpoints in a custom Docker image exposing a gRPC service interface. However, we soon noticed that it became error-prone to replicate the exact code and environment used by the modeling team while estimating the parameters. Moreover, this approach prevented us from leveraging high-performance model serving frameworks like NVIDIA's Triton, which is written in C++ and requires self-contained models that can be used without a Python interpreter. At this stage, we were facing a choice between attempting to export our PyTorch models to ONNX or TorchScript format. ONNX is an open specification for representing machine learning models that increasingly finds adoption. It is powered by a high-performance runtime developed by Microsoft (ONNX Runtime). While we were able to achieve performance acceleration for our TensorFlow BERT-based model using ONNX Runtime, at the time one of our PyTorch model required some operators that weren\u2019t yet supported in ONNX. Rather than implement these using custom operators, we decided to look into TorchScript for the time being.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### A maturing ecosystem\n\nIs it all roses? No, it has been a rockier journey than we expected. We encountered what seems to be a memory leak in the MKL libraries used by PyTorch while serving the PyTorch code directly. We encountered deadlocks in trying to load multiple models from multiple threads. We had difficulties exporting our models to ONNX and TorchScript formats. Models would not work out-of-the-box on hardware with multiple GPUs, they always accessed the particular GPU device on which they were exported. We encountered excessive memory usage in the Triton inference server while serving TorchScript models, which we found out was due to automatic differentiation accidentally being enabled during the forward pass. However, the ecosystem keeps improving, and there is a helpful and vibrant open-source community eager to work with us to mitigate such issues.", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Where to go from here? For those that require the flexibility of serving PyTorch code directly, without going through the extra step of exporting self-contained models, it is worth pointing out that the TorchServe project now provides a way of bundling the code together with parameter checkpoints into a single servable archive, greatly reducing the risk of code and parameters running apart. To us, however, exporting models to TorchScript has proven beneficial. It provides a clear interface between modeling and deployment teams, and TorchScript further reduces the latency when serving models on GPU via its just-in-time compilation engine.\n\n### Scaling at large and the future", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Finally, efficient deployment to the cloud is about more than just computing the response of a single model instance efficiently. Flexibility is needed in managing, versioning and updating models. High-level scalability must be achieved via techniques such as load-balancing, horizontal scaling and vertical scaling. If many models are involved, scale-to-zero quickly becomes a topic as it is unacceptable to pay for serving models that do not answer any requests. Providing such extra functionality on top of a low-level inference server like Triton is the job of an orchestration framework. After gaining some first experience with KubeFlow, to that end, we decided to turn our attention to Azure ML, which provides similar functionality but integrates more deeply with the Azure platform, on which we crucially rely for large parts of our technology stack already. This part of our journey has just begun.\n\n## Conclusion", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Academia has long recognized that we are \"standing on the shoulders of giants.\" As Artificial Intelligence is maturing from a scientific discipline into technology, the same spirit of collaboration that originally fueled its scientific foundation has carried over into the world of software engineering. Open-source enthusiasts join technology companies worldwide to build open software ecosystems that allow for new angles at solving some of the most pressing challenges of modern society. In this article, we've taken a look at Nuance's [Dragon Ambient eXperience](http://www.nuance.com/ambient), an AI-powered, voice-enabled solution that automatically documents patient care, reducing healthcare providers' administrative burdens. Nuance DAX improves the patient-provider experience, reduces physician burnout, and improves financial outcomes. It brings back trust, joy, and humanity to the delivery of healthcare. Fairseq and PyTorch have proven to be an incredible platform for powering this AI technology, and in turn, Nuance has contributed back some of its innovations in this space. For further reading, we invite you to take a look at our recent [ACL publication](https://www.aclweb.org/anthology/2020.nlpmc-1.4/) and the [Nuance \"What's Next\" blog](https://whatsnext.nuance.com/rd/using-deep-learning-to-generate-medical-reports/).", "metadata": {"source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Microsoft becomes maintainer of the Windows version of PyTorch'\nauthor: Maxim Lukiyanov - Principal PM at Microsoft, Emad Barsoum - Group EM at Microsoft, Guoliang Hua - Principal EM at Microsoft, Nikita Shulga - Tech Lead at Facebook, Geeta Chauhan - PE Lead at Facebook, Chris Gottbrath - Technical PM at Facebook, Jiachen Pu - Engineer at Facebook\n\n---\n\nAlong with the PyTorch 1.6 release, we are excited to announce that Microsoft has expanded its participation in the PyTorch community and will be responsible for the development and maintenance of the PyTorch build for Windows.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "According to the latest [Stack Overflow developer survey](https://insights.stackoverflow.com/survey/2020#technology-developers-primary-operating-systems), Windows remains the primary operating system for the developer community (46% Windows vs 28% MacOS). [Jiachen Pu](https://github.com/peterjc123) initially made a heroic effort to add support for PyTorch on Windows, but due to limited resources, Windows support for PyTorch has lagged behind other platforms. Lack of test coverage resulted in unexpected issues popping up every now and then. Some of the core tutorials, meant for new users to learn and adopt PyTorch, would fail to run. The installation experience was also not as smooth, with the lack of official PyPI support for PyTorch on Windows. Lastly, some of the PyTorch functionality was simply not available on the Windows platform, such as the TorchAudio domain library and distributed training support. To help alleviate this pain, Microsoft is happy to bring its Windows expertise to the table and bring PyTorch on Windows to its best possible self.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "In the PyTorch 1.6 release, we have improved the core quality of the Windows build by bringing test coverage up to par with Linux for core PyTorch and its domain libraries and by automating tutorial testing. Thanks to the broader PyTorch community, which contributed TorchAudio support to Windows, we were able to add test coverage to all three domain libraries: TorchVision, TorchText and TorchAudio. In subsequent releases of PyTorch, we will continue improving the Windows experience based on community feedback and requests. So far, the feedback we received from the community points to distributed training support and a better installation experience using pip as the next areas of improvement.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "In addition to the native Windows experience, Microsoft released a preview adding [GPU compute support to Windows Subsystem for Linux (WSL) 2](https://blogs.windows.com/windowsdeveloper/2020/06/17/gpu-accelerated-ml-training-inside-the-windows-subsystem-for-linux/) distros, with a focus on enabling AI and ML developer workflows. WSL is designed for developers that want to run any Linux based tools directly on Windows. This preview enables valuable scenarios for a variety of frameworks and Python packages that utilize [NVIDIA CUDA](https://developer.nvidia.com/cuda/wsl) for acceleration and only support Linux. This means WSL customers using the preview can run native Linux based PyTorch applications on Windows unmodified without the need for a traditional virtual machine or a dual boot setup.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "## Getting started with PyTorch on Windows\nIt's easy to get started with PyTorch on Windows. To install PyTorch using Anaconda with the latest GPU support, run the command below. To install different supported configurations of PyTorch, refer to the installation instructions on [pytorch.org](https://pytorch.org).\n\n`conda install pytorch torchvision cudatoolkit=10.2 -c pytorch`\n\nOnce you install PyTorch, learn more by visiting the [PyTorch Tutorials](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) and [documentation](https://pytorch.org/docs/stable/index.html).\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "## Getting started with PyTorch on Windows Subsystem for Linux\nThe [preview of NVIDIA CUDA support in WSL](https://docs.microsoft.com/en-us/windows/win32/direct3d12/gpu-cuda-in-wsl) is now available to Windows Insiders running Build 20150 or higher. In WSL, the command to install PyTorch using Anaconda is the same as the above command for native Windows. If you prefer pip, use the command below.\n\n`pip install torch torchvision`\n\nYou can use the same tutorials and documentation inside your WSL environment as on native Windows. This functionality is still in preview so if you run into issues with WSL please share feedback via the [WSL GitHub repo](https://github.com/microsoft/WSL) or with NVIDIA CUDA support share via NVIDIA\u2019s [Community Forum for CUDA on WSL](https://forums.developer.nvidia.com/c/accelerated-computing/cuda/cuda-on-windows-subsystem-for-linux/303).", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "## Feedback\nIf you find gaps in the PyTorch experience on Windows, please let us know on the [PyTorch discussion forum](https://discuss.pytorch.org/c/windows/26) or file an issue on [GitHub](https://github.com/pytorch/pytorch) using the #module: windows label.", "metadata": {"source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Accelerated PyTorch 2 Transformers\"\nauthor: Michael Gschwind, Driss Guessous, Christian Puhrsch\n---\n\nThe PyTorch 2.0 release includes a new high-performance implementation of the PyTorch Transformer API with the goal of making training and deployment of state-of-the-art Transformer models affordable. Following the successful release of \u201cfastpath\u201d inference execution (\u201cBetter Transformer\u201d), this release introduces high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SPDA).", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
+{"page_content": "You can take advantage of the new fused SDPA kernels either by calling the new SDPA operator directly (as described in the [SDPA tutorial](https://pytorch.org/tutorials/intermediate/scaled_dot_product_attention_tutorial.html#beta-implementing-high-performance-transformers-with-scaled-dot-product-attention-sdpa)), or transparently via integration into the pre-existing PyTorch Transformer API. All features of the PyTorch Transformer API will continue to work compatibly, with many features mapped to high-performance SDPA kernels, while other features are impossible to support with higher performance (e.g., need_weights, as per below) while expanded high-performance support for other features may still be under active development. \\\n \\\nSimilar to the \u201cfastpath\u201d architecture, custom kernels are fully integrated into the PyTorch Transformer API \u2013 thus, using the native Transformer and MultiHeadAttention API will enable users to transparently see significant speed improvements. Unlike the \u201cfastpath\u201d architecture, the newly introduced \u201ccustom kernels\u201d support many more use cases including models using Cross-Attention, Transformer Decoders, and for training models, in addition to the existing fastpath inference for fixed and variable sequence length Transformer Encoder and Self Attention use cases.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
+{"page_content": "To take full advantage of different hardware models and Transformer use cases, multiple SDPA custom kernels are supported, with custom kernel selection logic that will pick the highest-performance kernel for a given model and hardware type. In particular, the first custom kernels included with the PyTorch 2.0 release are the [Flash Attention](https://arxiv.org/abs/2205.14135) kernel (sdpa_flash, for 16-bit floating point training and inference on Nvidia GPUs with SM80+ architecture level) and the [xFormers memory-efficient attention](https://github.com/facebookresearch/xformers) kernel (sdpa_mem_eff, for 16-bit and 32-bit floating point training and inference on a broad range of Nvidia GPUs). A general-purpose kernel sdpa_math provides an implementation when the custom kernels are not applicable.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
+{"page_content": "As mentioned, custom kernels provide a wider range of support for execution scenarios To ensure efficient execution (e,g., to use GPU tensor cores), model configurations need to meet a small number of requirements. This list of requirements will evolve over time, prospectively relaxing constraints limiting the usage of currently supported custom kernels, or providing additional kernels in the future.\n\nFor the most up to date list of custom kernels and dispatch constraints, you can refer to [sdp_utils.h](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/transformers/cuda/sdp_utils.h). As of PyTorch 2.0, the existing fused SDPA kernels have the following constraints:", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
+{"page_content": "* Flash Attention only supports 16 bit floating point data types (float16 and bfloat16).\n* The head dimension must be a multiple of 8 for 16-bit floating point numbers and a multiple of 4 for 32-bit floating point numbers. At present, the maximum head_dim support for the Flash Attention custom kernel is 128.\n* The CUDA architecture level must be sm5x or better for the mem_efficient kernel, and sm80 for Flash Attention.\n* Flash Attention supports arbitrary dropout, in PyTorch 2.0 the mem_efficient kernel does not support dropout (i.e., dropout must be set to zero for this kernel to be selected in PyTorch 2.0). \n* To support variable-sequence length batches, all SDPA kernels support Nested Tensor inputs that combine input data and padding information using variable sequence length tensors for forward. (You can find more information about Nested Tensors in the [Nested Tensor tutorial](https://pytorch.org/tutorials/prototype/nestedtensor.html).)\n* You can specify both a _key_padding_mask_ and an _attn_mask_ by combining them before passing them to the SDPA operator. In particular, you can use the per-batch-element key padding mask of the nn.Transformer API to implement training for variable-sequence length inputs in a batch.\n* At present, the only attention mask supported by fused kernel implementation is the causal mask commonly used for training. To specify the causal mask in custom kernels, it must be specified with the _is_causal_ boolean and _attn_mask_ must be None. \n* Support for Nested Tensors is still under development. Specifically, in PyTorch 2.0, only the sdpa_math kernel supports training with Nested Tensors. Also, PyTorch 2.0 does not support Nested Tensors as part of code being compiled with torch.compile(). \n* The SDPA operator does not support returning averaged attention weights because computing them defeats the optimizations that enabled fused kernels to execute more efficiently. The argument _need_weights_ for torch.nn.MultiheadAttention's forward function defaults to True. In order to use the fused kernels, _need_weights_ needs to be set to _need_weights=False_.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
+{"page_content": "We find that an attention mask is rarely used in real-world applications, except for the causal mask during training. Consequently, we reduce kernel complexity and compute cost by building in the option to use a causal mask as attention mask, and select this new capability with the _is_causal_ parameter introduced in conjunction with the new SDPA operator. \n\nProviding the _is_causal_ Boolean flag for the frequently used causal mask also obviates the expensive and memory-intensive allocation of a causal mask, increasing training memory efficiency by allowing more memory to be used for large batch sizes, and reduce memory bandwidth and cache contention \u2013 which are both at a premium in GPU accelerators \u2013 by not needing to load an attention mask tensor.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
+{"page_content": "If the constraints of none of the available custom kernels are met, then training falls back to using the default sdpa_math kernel, implementing the mathematical equations for scaled dot product attention using a sequence of PyTorch operator to implement SDPA. This is the most general \u201ccatch-all\u201d fallback kernel to ensure successful training for all models.\n\nIn addition to the existing Transformer API, model developers may also use the scaled dot product attention kernels directly by calling the new `scaled_dot_product_attention()` operator. This operator may be used to efficiently implement multi-head attention by combining it with in-projection and outprojection, as described in the [SDPA tutorial](https://pytorch.org/tutorials/intermediate/scaled_dot_product_attention_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
+{"page_content": "In addition to adding custom kernels, Accelerated PyTorch 2 Transformers are integrated with PyTorch 2.0 compilation. To use your model while benefiting from the additional acceleration of PT2-compilation (for inference or training), pre-process the model with\n\n\n```\nmodel = torch.compile(model)\n```\n\n\nWe have achieved major speedups for training transformer models and in particular large language models with Accelerated PyTorch 2 Transformers using a combination of custom kernels and torch.compile(). \n\n\n{:width=\"100%\"}\nFigure: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for [nanoGPT](https://github.com/karpathy/nanoGPT) shown here.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
+{"page_content": "Finally, because the custom kernels are much more memory efficient, try to increase the size of training batches to achieve faster training with increased batch size.\n\nIn addition to automatic kernel selection, a context manager enables developers to override the kernel selection algorithm \u2013 this is not required for day to day operation, but enables developers to debug their code as well as enable performance engineers to override kernel selection. The SDPA tutorial provides additional information on using the SDPA context manager.\n\nIn addition to availability as part of the nn.Transformer API, Accelerated PyTorch 2 Transformer custom kernels are also available in conjunction with the torchtext, torchvision, and fairseq domain libraries with the launch of PyTorch 2.0.", "metadata": {"source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Mapillary Research: Seamless Scene Segmentation and In-Place Activated BatchNorm'\nauthor: Lorenzo Porzi, Mapillary\nredirect_from: /2019/07/23/mapillary-research.html\n---\n\nWith roads in developed countries like the US changing up to 15% annually, Mapillary addresses a growing demand for keeping maps updated by combining images from any camera into a 3D visualization of the world. Mapillary's independent and collaborative approach enables anyone to collect, share, and use street-level images for improving maps, developing cities, and advancing the automotive industry.\n\nToday, people and organizations all over the world have contributed more than 600 million images toward Mapillary's mission of helping people understand the world's places through images and making this data available, with clients and partners including the World Bank, HERE, and Toyota Research Institute.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "Mapillary\u2019s computer vision technology brings intelligence to maps in an unprecedented way, increasing our overall understanding of the world. [Mapillary](https://www.mapillary.com/) runs state-of-the-art semantic image analysis and image-based 3d modeling at scale and on all its images. In this post we discuss two recent works from Mapillary Research and their implementations in PyTorch - Seamless Scene Segmentation [1] and In-Place Activated BatchNorm [2] - generating Panoptic segmentation results and saving up to 50% of GPU memory during training, respectively.\n\n## Seamless Scene Segmentation\n\n_Github project page: [https://github.com/mapillary/seamseg/](https://github.com/mapillary/seamseg/)_\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "The objective of Seamless Scene Segmentation is to predict a \u201cpanoptic\u201d segmentation [3] from an image, that is a complete labeling where each pixel is assigned with a class id and, where possible, an instance id. Like many modern CNNs dealing with instance detection and segmentation, we adopt the Mask R-CNN framework [4], using ResNet50 + FPN [5] as a backbone. This architecture works in two stages: first, the \u201cProposal Head\u201d selects a set of candidate bounding boxes on the image (i.e. the proposals) that could contain an object; then, the \u201cMask Head\u201d focuses on each proposal, predicting its class and segmentation mask. The output of this process is a \u201csparse\u201d instance segmentation, covering only the parts of the image that contain countable objects (e.g. cars and pedestrians).", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "To complete our panoptic approach coined Seamless Scene Segmentation, we add a third stage to Mask R-CNN. Stemming from the same backbone, the \u201cSemantic Head\u201d predicts a dense semantic segmentation over the whole image, also accounting for the uncountable or amorphous classes (e.g. road and sky). The outputs of the Mask and Semantic heads are finally fused using a simple non-maximum suppression algorithm to generate the final panoptic prediction. All details about the actual network architecture, used losses and underlying math can be found at the [project website](https://research.mapillary.com/publication/cvpr19a) for our CVPR 2019 paper [1].", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "While several versions of Mask R-CNN are publicly available, including an [official implementation](https://github.com/facebookresearch/Detectron) written in Caffe2, at Mapillary we decided to build Seamless Scene Segmentation from scratch using PyTorch, in order to have full control and understanding of the whole pipeline. While doing so we encountered a couple of main stumbling blocks, and had to come up with some creative workarounds we are going to describe next.\n\n## Dealing with variable-sized tensors", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "Something that sets aside panoptic segmentation networks from traditional CNNs is the prevalence of variable-sized data. In fact, many of the quantities we are dealing with cannot be easily represented with fixed sized tensors: each image contains a different number of objects, the Proposal head can produce a different number of proposals for each image, and the images themselves can have different sizes. While this is not a problem per-se -- one could just process images one at a time -- we would still like to exploit batch-level parallelism as much as possible. Furthermore, when performing distributed training with multiple GPUs, `DistributedDataParallel` expects its inputs to be batched, uniformly-sized tensors.\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "Our solution to these issues is to wrap each batch of variable-sized tensors in a `PackedSequence`. `PackedSequence` is little more than a glorified list class for tensors, tagging its contents as \u201crelated\u201d, ensuring that they all share the same type, and providing useful methods like moving all the tensors to a particular device, etc. When performing light-weight operations that wouldn\u2019t be much faster with batch-level parallelism, we simply iterate over the contents of the `PackedSequence` in a for loop. When performance is crucial, e.g. in the body of the network, we simply concatenate the contents of the PackedSequence, adding zero padding as required (like in RNNs with variable-length inputs), and keeping track of the original dimensions of each tensor.\n\n`PackedSequence`s also help us deal with the second problem highlighted above. We slightly modify `DistributedDataParallel` to recognize `PackedSequence` inputs, splitting them in equally sized chunks and distributing their contents across the GPUs.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "## Asymmetric computational graphs with Distributed Data Parallel\n\nAnother, perhaps more subtle, peculiarity of our network is that it can generate asymmetric computational graphs across GPUs. In fact, some of the modules that compose the network are \u201coptional\u201d, in the sense that they are not always computed for all images. As an example, when the Proposal head doesn\u2019t output any proposal, the Mask head is not traversed at all. If we are training on multiple GPUs with `DistributedDataParallel`, this results in one of the replicas not computing gradients for the Mask head parameters.\n\nPrior to PyTorch 1.1, this resulted in a crash, so we had to develop a workaround. Our simple but effective solution was to compute a \u201cfake forward pass\u201d when no actual forward is required, i.e. something like this:\n\n```python\ndef fake_forward():\n fake_input = get_correctly_shaped_fake_input()\n fake_output = mask_head(fake_input)\n fake_loss = fake_output.sum() * 0\n return fake_loss\n```", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "Here, we generate a batch of bogus data, pass it through the Mask head, and return a loss that always back-progates zeros to all parameters.\n\nStarting from PyTorch 1.1 this workaround is no longer required: by setting `find_unused_parameters=True` in the constructor, `DistributedDataParallel` is told to identify parameters whose gradients have not been computed by all replicas and correctly handle them. This leads to some substantial simplifications in our code base!\n\n## In-place Activated BatchNorm\n\n_Github project page: [https://github.com/mapillary/inplace_abn/](https://github.com/mapillary/inplace_abn/)_", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "Most researchers would probably agree that there are always constraints in terms of available GPU resources, regardless if their research lab has access to only a few or multiple thousands of GPUs. In a time where at Mapillary we still worked at rather few and mostly 12GB Titan X - style prosumer GPUs, we were searching for a solution that virtually enhances the usable memory during training, so we would be able to obtain and push state-of-the-art results on dense labeling tasks like semantic segmentation. In-place activated BatchNorm is enabling us to use up to 50% more memory (at little computational overhead) and is therefore deeply integrated in all our current projects (including Seamless Scene Segmentation described above).\n\n\n

\n
", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "When processing a BN-Activation-Convolution sequence in the forward pass, most deep learning frameworks (including PyTorch) need to store two big buffers, i.e. the input x of BN and the input z of Conv. This is necessary because the standard implementations of the backward passes of BN and Conv depend on their inputs to calculate the gradients. Using InPlace-ABN to replace the BN-Activation sequence, we can safely discard x, thus saving up to 50% GPU memory at training time. To achieve this, we rewrite the backward pass of BN in terms of its output y, which is in turn reconstructed from z by inverting the activation function.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "The only limitation of InPlace-ABN is that it requires using an invertible activation function, such as leaky relu or elu. Except for this, it can be used as a direct, drop-in replacement for BN+activation modules in any network. Our native CUDA implementation offers minimal computational overhead compared to PyTorch\u2019s standard BN, and is available for anyone to use from here: [https://github.com/mapillary/inplace_abn/](https://github.com/mapillary/inplace_abn/).\n\n## Synchronized BN with asymmetric graphs and unbalanced batches", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "When training networks with synchronized SGD over multiple GPUs and/or multiple nodes, it\u2019s common practice to compute BatchNorm statistics separately on each device. However, in our experience working with semantic and panoptic segmentation networks, we found that accumulating mean and variance across all workers can bring a substantial boost in accuracy. This is particularly true when dealing with small batches, like in Seamless Scene Segmentation where we train with a single, super-high resolution image per GPU.\n\nInPlace-ABN supports synchronized operation over multiple GPUs and multiple nodes, and, since version 1.1, this can also be achieved in the standard PyTorch library using [SyncBatchNorm](https://pytorch.org/docs/stable/nn.html#syncbatchnorm). Compared to SyncBatchNorm, however, we support some additional functionality which is particularly important for Seamless Scene Segmentation: unbalanced batches and asymmetric graphs.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "As mentioned before, Mask R-CNN-like networks naturally give rise to variable-sized tensors. Thus, in InPlace-ABN we calculate synchronized statistics using a variant of the parallel algorithm described [here](https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Parallel_algorithm), which properly takes into account the fact that each GPU can hold a different number of samples. PyTorch\u2019s SyncBatchNorm is currently being revised to support this, and the improved functionality will be available in a future release.", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "Asymmetric graphs (in the sense mentioned above) are another complicating factor one has to deal with when creating a synchronized BatchNorm implementation. Luckily, PyTorch\u2019s distributed group functionality allows us to restrict distributed communication to a subset of workers, easily excluding those that are currently inactive. The only missing piece is that, in order to create a distributed group, each process needs to know the ids of all processes that will participate in the group, and even processes that are not part of the group need to call the `new_group()` function. In InPlace-ABN we handle it with a function like this:\n\n```python\nimport torch\nimport torch.distributed as distributed\n\ndef active_group(active):\n \"\"\"Initialize a distributed group where each process can independently decide whether to participate or not\"\"\"\n world_size = distributed.get_world_size()\n rank = distributed.get_rank()", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "# Gather active status from all workers\n active = torch.tensor(rank if active else -1, dtype=torch.long, device=torch.cuda.current_device())\n active_workers = torch.empty(world_size, dtype=torch.long, device=torch.cuda.current_device())\n distributed.all_gather(list(active_workers.unbind(0)), active)\n\n # Create group\n active_workers = [int(i) for i in active_workers.tolist() if i != -1]\n group = distributed.new_group(active_workers)\n return group\n```\n\nFirst each process, including inactive ones, communicates its status to all others through an `all_gather` call, then it creates the distributed group with the shared information. In the actual implementation we also include a caching mechanism for groups, since `new_group()` is usually too expensive to call at each batch.\n\n## References\n\n[1] Seamless Scene Segmentation; Lorenzo Porzi, Samuel Rota Bul\u00f2, Aleksander Colovic, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2019", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
+{"page_content": "[2] In-place Activated BatchNorm for Memory-Optimized Training of DNNs; Samuel Rota Bul\u00f2, Lorenzo Porzi, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2018\n\n[3] Panoptic Segmentation; Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, Piotr Dollar; Computer Vision and Pattern Recognition (CVPR), 2019\n\n[4] Mask R-CNN; Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick; International Conference on Computer Vision (ICCV), 2017\n\n[5] Feature Pyramid Networks for Object Detection; Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie; Computer Vision and Pattern Recognition (CVPR), 2017", "metadata": {"source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"}}
{"page_content": "---\nlayout: blog_detail\ntitle: 'Introduction to Quantization on PyTorch'\nauthor: Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath, and Seth Weidman\n---\n\nIt\u2019s important to make efficient use of both server-side and on-device compute resources when developing machine learning applications. To support more efficient deployment on servers and edge devices, PyTorch added a support for model quantization using the familiar eager mode Python API.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Quantization leverages 8bit integer (int8) instructions to reduce the model size and run the inference faster (reduced latency) and can be the difference between a model achieving quality of service goals or even fitting into the resources available on a mobile device. Even when resources aren\u2019t quite so constrained it may enable you to deploy a larger and more accurate model. Quantization is available in PyTorch starting in version 1.3 and with the release of PyTorch 1.4 we published quantized models for", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2 in the PyTorch torchvision 0.5 library.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "This blog post provides an overview of the quantization support on PyTorch and its incorporation with the TorchVision domain library.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**What is Quantization?**\n\nQuantization refers to techniques for doing both computations and memory accesses with lower precision data, usually int8 compared to floating point implementations. This enables performance gains in several important areas:\n* 4x reduction in model size;\n* 2-4x reduction in memory bandwidth;\n* 2-4x faster inference due to savings in memory bandwidth and faster compute with int8 arithmetic (the exact speed up varies depending on the hardware, the runtime, and the model).", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Quantization does not however come without additional cost. Fundamentally quantization means introducing approximations and the resulting networks have slightly less accuracy. These techniques attempt to minimize the gap between the full floating point accuracy and the quantized accuracy.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "We designed quantization to fit into the PyTorch framework. The means that:\n1. PyTorch has data types corresponding to [quantized tensors](https://github.com/pytorch/pytorch/wiki/Introducing-Quantized-Tensor), which share many of the features of tensors.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "2. One can write kernels with quantized tensors, much like kernels for floating point tensors to customize their implementation. PyTorch supports quantized modules for common operations as part of the `torch.nn.quantized` and `torch.nn.quantized.dynamic` name-space.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3. Quantization is compatible with the rest of PyTorch: quantized models are traceable and scriptable. The quantization method is virtually identical for both server and mobile backends. One can easily mix quantized and floating point operations in a model.\n4. Mapping of floating point tensors to quantized tensors is customizable with user defined observer/fake-quantization blocks. PyTorch provides default implementations that should work for most use cases.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
\n\nWe developed three techniques for quantizing neural networks in PyTorch as part of quantization tooling in the `torch.quantization` name-space.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**The Three Modes of Quantization Supported in PyTorch starting version 1.3**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "1. ### **Dynamic Quantization**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The easiest method of quantization PyTorch supports is called **dynamic quantization**. This involves not just converting the weights to int8 - as happens in all quantization variants - but also converting the activations to int8 on the fly, just before doing the computation (hence \u201cdynamic\u201d). The computations will thus be performed using efficient int8 matrix multiplication and convolution implementations, resulting in faster compute. However, the activations are read and written to memory in floating", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "point format.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* **PyTorch API**: we have a simple API for dynamic quantization in PyTorch. `torch.quantization.quantize_dynamic` takes in a model, as well as a couple other arguments, and produces a quantized model! Our [end-to-end tutorial](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html) illustrates this for a BERT model; while the tutorial is long and contains sections on loading pre-trained models and other concepts unrelated to quantization, the part the quantizes the BERT model", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "is simply:", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```python\n import torch.quantization\n quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)\n ```\n * See the documentation for the function [here](https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic) an end-to-end example in our tutorials [here](https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html) and [here](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "2. ### **Post-Training Static Quantization**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "One can further improve the performance (latency) by converting networks to use both integer arithmetic and int8 memory accesses. Static quantization performs the additional step of first feeding batches of data through the network and computing the resulting distributions of the different activations (specifically, this is done by inserting \u201cobserver\u201d modules at different points that record these distributions). This information is used to determine how specifically the different activations should be", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "quantized at inference time (a simple technique would be to simply divide the entire range of activations into 256 levels, but we support more sophisticated methods as well). Importantly, this additional step allows us to pass quantized values between operations instead of converting these values to floats - and then back to ints - between every operation, resulting in a significant speed-up.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "With this release, we\u2019re supporting several features that allow users to optimize their static quantization:\n 1. Observers: you can customize observer modules which specify how statistics are collected prior to quantization to try out more advanced methods to quantize your data.\n 2. Operator fusion: you can fuse multiple operations into a single operation, saving on memory access while also improving the operation\u2019s numerical accuracy.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3. Per-channel quantization: we can independently quantize weights for each output channel in a convolution/linear layer, which can lead to higher accuracy with almost the same speed.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* ### **PyTorch API**:\n * To fuse modules, we have `torch.quantization.fuse_modules`\n * Observers are inserted using `torch.quantization.prepare`\n * Finally, quantization itself is done using `torch.quantization.convert`", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "We have a tutorial with an end-to-end example of quantization (this same tutorial also covers our third quantization method, quantization-aware training), but because of our simple API, the three lines that perform post-training static quantization on the pre-trained model `myModel` are:\n ```python\n # set quantization config for server (x86)\n deploymentmyModel.qconfig = torch.quantization.get_default_config('fbgemm')", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "# insert observers\n torch.quantization.prepare(myModel, inplace=True)\n # Calibrate the model and collect statistics\n\n # convert to quantized version\n torch.quantization.convert(myModel, inplace=True)", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3. ### **Quantization Aware Training**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Quantization-aware training(QAT)** is the third method, and the one that typically results in highest accuracy of these three. With QAT, all weights and activations are \u201cfake quantized\u201d during both the forward and backward passes of training: that is, float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. Thus, all the weight adjustments during training are made while \u201caware\u201d of the fact that the model will ultimately be quantized; after", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "quantizing, therefore, this method usually yields higher accuracy than the other two methods.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* ### **PyTorch API**:\n * `torch.quantization.prepare_qat` inserts fake quantization modules to model quantization.\n * Mimicking the static quantization API, `torch.quantization.convert` actually quantizes the model once training is complete.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "For example, in [the end-to-end example](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html), we load in a pre-trained model as `qat_model`, then we simply perform quantization-aware training using:\n\n ```python\n # specify quantization config for QAT\n qat_model.qconfig=torch.quantization.get_default_qat_qconfig('fbgemm')\n\n # prepare QAT\n torch.quantization.prepare_qat(qat_model, inplace=True)", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "# convert to quantized version, removing dropout, to check for accuracy on each\n epochquantized_model=torch.quantization.convert(qat_model.eval(), inplace=False)\n ```", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Device and Operator Support**\nQuantization support is restricted to a subset of available operators, depending on the method being used, for a list of supported operators, please see the documentation at [https://pytorch.org/docs/stable/quantization.html](https://pytorch.org/docs/stable/quantization.html).", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The set of available operators and the quantization numerics also depend on the backend being used to run quantized models. Currently quantized operators are supported only for CPU inference in the following backends: x86 and ARM. Both the quantization configuration (how tensors should be quantized and the quantized kernels (arithmetic with quantized tensors) are backend dependent. One can specify the backend by doing:", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```python\nimport torchbackend='fbgemm'\n# 'fbgemm' for server, 'qnnpack' for mobile\nmy_model.qconfig = torch.quantization.get_default_qconfig(backend)\n# prepare and convert model\n# Set the backend on which the quantized kernels need to be run\ntorch.backends.quantized.engine=backend", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Quantization leverages 8bit integer (int8) instructions to reduce the model size and run the inference faster (reduced latency) and can be the difference between a model achieving quality of service goals or even fitting into the resources available on a mobile device. Even when resources aren\u2019t quite so constrained it may enable you to deploy a larger and more accurate model. Quantization is available in PyTorch starting in version 1.3 and with the release of PyTorch 1.4 we published quantized models for ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2 in the PyTorch torchvision 0.5 library.\n\nThis blog post provides an overview of the quantization support on PyTorch and its incorporation with the TorchVision domain library.\n\n## **What is Quantization?**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Quantization refers to techniques for doing both computations and memory accesses with lower precision data, usually int8 compared to floating point implementations. This enables performance gains in several important areas:\n* 4x reduction in model size;\n* 2-4x reduction in memory bandwidth;\n* 2-4x faster inference due to savings in memory bandwidth and faster compute with int8 arithmetic (the exact speed up varies depending on the hardware, the runtime, and the model).\n\nQuantization does not however come without additional cost. Fundamentally quantization means introducing approximations and the resulting networks have slightly less accuracy. These techniques attempt to minimize the gap between the full floating point accuracy and the quantized accuracy.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "We designed quantization to fit into the PyTorch framework. The means that:\n1. PyTorch has data types corresponding to [quantized tensors](https://github.com/pytorch/pytorch/wiki/Introducing-Quantized-Tensor), which share many of the features of tensors.\n2. One can write kernels with quantized tensors, much like kernels for floating point tensors to customize their implementation. PyTorch supports quantized modules for common operations as part of the `torch.nn.quantized` and `torch.nn.quantized.dynamic` name-space.\n3. Quantization is compatible with the rest of PyTorch: quantized models are traceable and scriptable. The quantization method is virtually identical for both server and mobile backends. One can easily mix quantized and floating point operations in a model.\n4. Mapping of floating point tensors to quantized tensors is customizable with user defined observer/fake-quantization blocks. PyTorch provides default implementations that should work for most use cases.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
\n\nWe developed three techniques for quantizing neural networks in PyTorch as part of quantization tooling in the `torch.quantization` name-space.\n\n## **The Three Modes of Quantization Supported in PyTorch starting version 1.3**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "1. ### **Dynamic Quantization**\n The easiest method of quantization PyTorch supports is called **dynamic quantization**. This involves not just converting the weights to int8 - as happens in all quantization variants - but also converting the activations to int8 on the fly, just before doing the computation (hence \u201cdynamic\u201d). The computations will thus be performed using efficient int8 matrix multiplication and convolution implementations, resulting in faster compute. However, the activations are read and written to memory in floating point format.\n * **PyTorch API**: we have a simple API for dynamic quantization in PyTorch. `torch.quantization.quantize_dynamic` takes in a model, as well as a couple other arguments, and produces a quantized model! Our [end-to-end tutorial](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html) illustrates this for a BERT model; while the tutorial is long and contains sections on loading pre-trained models and other concepts unrelated to quantization, the part the quantizes the BERT model is simply:", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```python\n import torch.quantization\n quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)\n ```\n * See the documentation for the function [here](https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic) an end-to-end example in our tutorials [here](https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html) and [here](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html).\n\n2. ### **Post-Training Static Quantization**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "One can further improve the performance (latency) by converting networks to use both integer arithmetic and int8 memory accesses. Static quantization performs the additional step of first feeding batches of data through the network and computing the resulting distributions of the different activations (specifically, this is done by inserting \u201cobserver\u201d modules at different points that record these distributions). This information is used to determine how specifically the different activations should be quantized at inference time (a simple technique would be to simply divide the entire range of activations into 256 levels, but we support more sophisticated methods as well). Importantly, this additional step allows us to pass quantized values between operations instead of converting these values to floats - and then back to ints - between every operation, resulting in a significant speed-up.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "With this release, we\u2019re supporting several features that allow users to optimize their static quantization:\n 1. Observers: you can customize observer modules which specify how statistics are collected prior to quantization to try out more advanced methods to quantize your data.\n 2. Operator fusion: you can fuse multiple operations into a single operation, saving on memory access while also improving the operation\u2019s numerical accuracy.\n 3. Per-channel quantization: we can independently quantize weights for each output channel in a convolution/linear layer, which can lead to higher accuracy with almost the same speed.\n\n * ### **PyTorch API**:\n * To fuse modules, we have `torch.quantization.fuse_modules`\n * Observers are inserted using `torch.quantization.prepare`\n * Finally, quantization itself is done using `torch.quantization.convert`", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "We have a tutorial with an end-to-end example of quantization (this same tutorial also covers our third quantization method, quantization-aware training), but because of our simple API, the three lines that perform post-training static quantization on the pre-trained model `myModel` are:\n ```python\n # set quantization config for server (x86)\n deploymentmyModel.qconfig = torch.quantization.get_default_config('fbgemm')\n\n # insert observers\n torch.quantization.prepare(myModel, inplace=True)\n # Calibrate the model and collect statistics\n\n # convert to quantized version\n torch.quantization.convert(myModel, inplace=True)\n ```", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "3. ### **Quantization Aware Training**\n **Quantization-aware training(QAT)** is the third method, and the one that typically results in highest accuracy of these three. With QAT, all weights and activations are \u201cfake quantized\u201d during both the forward and backward passes of training: that is, float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. Thus, all the weight adjustments during training are made while \u201caware\u201d of the fact that the model will ultimately be quantized; after quantizing, therefore, this method usually yields higher accuracy than the other two methods.\n* ### **PyTorch API**:\n * `torch.quantization.prepare_qat` inserts fake quantization modules to model quantization.\n * Mimicking the static quantization API, `torch.quantization.convert` actually quantizes the model once training is complete.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "For example, in [the end-to-end example](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html), we load in a pre-trained model as `qat_model`, then we simply perform quantization-aware training using:\n\n ```python\n # specify quantization config for QAT\n qat_model.qconfig=torch.quantization.get_default_qat_qconfig('fbgemm')\n\n # prepare QAT\n torch.quantization.prepare_qat(qat_model, inplace=True)\n\n # convert to quantized version, removing dropout, to check for accuracy on each\n epochquantized_model=torch.quantization.convert(qat_model.eval(), inplace=False)\n ```\n\n### **Device and Operator Support**\nQuantization support is restricted to a subset of available operators, depending on the method being used, for a list of supported operators, please see the documentation at [https://pytorch.org/docs/stable/quantization.html](https://pytorch.org/docs/stable/quantization.html).", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "The set of available operators and the quantization numerics also depend on the backend being used to run quantized models. Currently quantized operators are supported only for CPU inference in the following backends: x86 and ARM. Both the quantization configuration (how tensors should be quantized and the quantized kernels (arithmetic with quantized tensors) are backend dependent. One can specify the backend by doing:\n\n```python\nimport torchbackend='fbgemm'\n# 'fbgemm' for server, 'qnnpack' for mobile\nmy_model.qconfig = torch.quantization.get_default_qconfig(backend)\n# prepare and convert model\n# Set the backend on which the quantized kernels need to be run\ntorch.backends.quantized.engine=backend\n```", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
{"page_content": "However, quantization aware training occurs in full floating point and can run on either GPU or CPU. Quantization aware training is typically only used in CNN models when post training static or dynamic quantization doesn\u2019t yield sufficient accuracy. This can occur with models that are highly optimized to achieve small size (such as Mobilenet).", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Integration in torchvision**\nWe\u2019ve also enabled quantization for some of the most popular models in [torchvision](https://github.com/pytorch/vision/tree/master/torchvision/models/quantization): Googlenet, Inception, Resnet, ResNeXt, Mobilenet and Shufflenet. We have upstreamed these changes to torchvision in three forms:\n1. Pre-trained quantized weights so that you can use them right away.\n2. Quantization ready model definitions so that you can do post-training quantization or quantization aware training.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "3. A script for doing quantization aware training \u2014 which is available for any of these model though, as you will learn below, we only found it necessary for achieving accuracy with Mobilenet.\n4. We also have a [tutorial](https://pytorch.org/tutorials/intermediate/quantized_transfer_learning_tutorial.html) showing how you can do transfer learning with quantization using one of the torchvision models.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Choosing an approach**\nThe choice of which scheme to use depends on multiple factors:\n1. Model/Target requirements: Some models might be sensitive to quantization, requiring quantization aware training.\n2. Operator/Backend support: Some backends require fully quantized operators.\n\nCurrently, operator coverage is limited and may restrict the choices listed in the table below:\nThe table below provides a guideline.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n\n \n Model Type | \n Preferred scheme | \n Why | \n
\n \n LSTM/RNN | \n Dynamic Quantization | ", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Throughput dominated by compute/memory bandwidth for weights | \n
\n \n BERT/Transformer | \n Dynamic Quantization | \n Throughput dominated by compute/memory bandwidth for weights | \n
\n \n CNN | \n Static Quantization | \n Throughput limited by memory bandwidth for activations | \n
\n ", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "CNN | \n Quantization Aware Training | \n In the case where accuracy can't be achieved with static quantization | \n
\n
", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Performance Results**\nQuantization provides a 4x reduction in the model size and a speedup of 2x to 3x compared to floating point implementations depending on the hardware platform and the model being benchmarked. Some sample results are:", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n \n Model | \n Float Latency (ms) | \n Quantized Latency (ms) | \n Inference Performance Gain | \n Device | \n Notes | \n
\n \n BERT | \n 581 | \n 313 | \n 1.8x | ", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Xeon-D2191 (1.6GHz) | \n Batch size = 1, Maximum sequence length= 128, Single thread, x86-64, Dynamic quantization | \n
\n \n Resnet-50 | \n 214 | \n 103 | \n 2x | \n Xeon-D2191 (1.6GHz) | \n Single thread, x86-64, Static quantization | \n
\n ", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Mobilenet-v2 | \n 97 | \n 17 | \n 5.7x | \n Samsung S9 | \n Static quantization, Floating point numbers are based on Caffe2 run-time and are not optimized | \n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Accuracy results**\nWe also compared the accuracy of static quantized models with the floating point models on Imagenet. For dynamic quantization, we [compared](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py) the F1 score of BERT on the GLUE benchmark for MRPC.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Computer Vision Model accuracy**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n \n Model | \n Top-1 Accuracy (Float) | \n Top-1 Accuracy (Quantized) | \n Quantization scheme | \n
\n \n Googlenet | \n 69.8 | \n 69.7 | \n Static post training quantization | \n
\n \n Inception-v3 | \n 77.5 | ", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "77.1 | \n Static post training quantization | \n
\n \n ResNet-18 | \n 69.8 | \n 69.4 | \n Static post training quantization | \n
\n \n Resnet-50 | \n 76.1 | \n 75.9 | \n Static post training quantization | \n
\n ", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "ResNext-101 32x8d | \n 79.3 | \n 79 | \n Static post training quantization | \n
\n \n Mobilenet-v2 | \n 71.9 | \n 71.6 | \n Quantization Aware Training | \n
\n \n Shufflenet-v2 | \n 69.4 | \n 68.4 | ", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Static post training quantization | \n
\n
", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Speech and NLP Model accuracy**\n\n\n
\n \n Model | \n F1 (GLUEMRPC) Float | \n F1 (GLUEMRPC) Quantized | \n Quantization scheme | \n
\n \n BERT | \n 0.902 | \n 0.895 | \n Dynamic quantization | \n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Conclusion**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "To get started on quantizing your models in PyTorch, start with [the tutorials on the PyTorch website](https://pytorch.org/tutorials/#model-optimization). If you are working with sequence data start with [dynamic quantization for LSTM](https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html), or [BERT](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html). If you are working with image data then we recommend starting with the [transfer learning with", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "quantization](https://pytorch.org/tutorials/intermediate/quantized_transfer_learning_tutorial.html) tutorial. Then you can explore [static post training quantization](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html). If you find that the accuracy drop with post training quantization is too high, then try [quantization aware training](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "If you run into issues you can get community help by posting in at [discuss.pytorch.org](https://discuss.pytorch.org/), use the quantization category for quantization related issues.\n\n_This post is authored by Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath and Seth Weidman. Special thanks to Jianyu Huang, Lingyi Liu and Haixin Liu for producing quantization metrics included in this post._", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "**Further reading**:\n1. PyTorch quantization presentation at Neurips: [(https://research.fb.com/wp-content/uploads/2019/12/2.-Quantization.pptx)](https://research.fb.com/wp-content/uploads/2019/12/2.-Quantization.pptx)\n2. Quantized Tensors [(https://github.com/pytorch/pytorch/wiki/\nIntroducing-Quantized-Tensor)](https://github.com/pytorch/pytorch/wiki/Introducing-Quantized-Tensor)\n3. Quantization RFC on Github [(https://github.com/pytorch/pytorch/", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "issues/18318)](https://github.com/pytorch/pytorch/issues/18318)", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Scaling PyTorch FSDP for Training Foundation Models on IBM Cloud\"\nauthor: Linsong Chu, Less Wright, Hamid Shojanazeri, Sophia Wen, Raghu Ganti, Geeta Chauhan\nfeatured-img: \"/assets/images/scaling-pytorch-fsdp-image1-IBM_scaling_FSDP_visual_new.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "Large model training using a cloud native approach is of growing interest for many enterprises given the emergence and success of [foundation models](https://research.ibm.com/blog/what-are-foundation-models). Some AI practitioners may assume that the only way they can achieve high GPU utilization for distributed training jobs is to run them on HPC systems, such as those inter-connected with Infiniband and may not consider Ethernet connected systems. We demonstrate how the latest distributed training", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "technique, Fully Sharded Data Parallel (FSDP) from PyTorch, successfully scales to models of size 10B+ parameters using commodity Ethernet networking in IBM Cloud.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch FSDP Scaling", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "As models get larger, the standard techniques for data parallel training work only if the GPU can hold a full replica of the model, along with its training state (optimizer, activations, etc.). However, GPU memory increases have not kept up with the model size increases and new techniques for training such models have emerged (e.g., Fully Sharded Data Parallel, [DeepSpeed](https://www.deepspeed.ai/)), which allow us to efficiently distribute the model and data over multiple GPUs during training. In this", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "blog post, we demonstrate a path to achieve remarkable scaling of model training to 64 nodes (512 GPUs) using PyTorch native FSDP APIs as we increase model sizes to 11B.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "What is Fully Sharded Data Parallel?\n\nFSDP extends the distributed data parallel training (DDP) approach by sharding model parameters, gradient and optimizer states into K FSDP units, determined by using a wrapping policy. FSDP achieves large model training efficiency in terms of resources and performance by significantly reducing the memory footprint on each GPU and overlapping computation and communication.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "Resource efficiency is achieved with memory footprint reduction by having all GPUs own a portion of each FSDP unit. To process a given FSDP unit, all GPUs share their locally owned portion via all_gather communication calls.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "#### **Integration in torchvision**\nWe\u2019ve also enabled quantization for some of the most popular models in [torchvision](https://github.com/pytorch/vision/tree/master/torchvision/models/quantization): Googlenet, Inception, Resnet, ResNeXt, Mobilenet and Shufflenet. We have upstreamed these changes to torchvision in three forms:\n1. Pre-trained quantized weights so that you can use them right away.\n2. Quantization ready model definitions so that you can do post-training quantization or quantization aware training.\n3. A script for doing quantization aware training \u2014 which is available for any of these model though, as you will learn below, we only found it necessary for achieving accuracy with Mobilenet.\n4. We also have a [tutorial](https://pytorch.org/tutorials/intermediate/quantized_transfer_learning_tutorial.html) showing how you can do transfer learning with quantization using one of the torchvision models.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### **Choosing an approach**\nThe choice of which scheme to use depends on multiple factors:\n1. Model/Target requirements: Some models might be sensitive to quantization, requiring quantization aware training.\n2. Operator/Backend support: Some backends require fully quantized operators.\n\nCurrently, operator coverage is limited and may restrict the choices listed in the table below:\nThe table below provides a guideline.", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "\n\n \n Model Type | \n Preferred scheme | \n Why | \n
\n \n LSTM/RNN | \n Dynamic Quantization | \n Throughput dominated by compute/memory bandwidth for weights | \n
\n \n BERT/Transformer | \n Dynamic Quantization | \n Throughput dominated by compute/memory bandwidth for weights | \n
\n \n CNN | \n Static Quantization | \n Throughput limited by memory bandwidth for activations | \n
\n \n CNN | \n Quantization Aware Training | \n In the case where accuracy can't be achieved with static quantization | \n
\n
", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### **Performance Results**\nQuantization provides a 4x reduction in the model size and a speedup of 2x to 3x compared to floating point implementations depending on the hardware platform and the model being benchmarked. Some sample results are:", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n \n Model | \n Float Latency (ms) | \n Quantized Latency (ms) | \n Inference Performance Gain | \n Device | \n Notes | \n
\n \n BERT | \n 581 | \n 313 | \n 1.8x | \n Xeon-D2191 (1.6GHz) | \n Batch size = 1, Maximum sequence length= 128, Single thread, x86-64, Dynamic quantization | \n
\n \n Resnet-50 | \n 214 | \n 103 | \n 2x | \n Xeon-D2191 (1.6GHz) | \n Single thread, x86-64, Static quantization | \n
\n \n Mobilenet-v2 | \n 97 | \n 17 | \n 5.7x | \n Samsung S9 | \n Static quantization, Floating point numbers are based on Caffe2 run-time and are not optimized | \n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### **Accuracy results**\nWe also compared the accuracy of static quantized models with the floating point models on Imagenet. For dynamic quantization, we [compared](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py) the F1 score of BERT on the GLUE benchmark for MRPC.\n\n#### **Computer Vision Model accuracy**", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "\n \n Model | \n Top-1 Accuracy (Float) | \n Top-1 Accuracy (Quantized) | \n Quantization scheme | \n
\n \n Googlenet | \n 69.8 | \n 69.7 | \n Static post training quantization | \n
\n \n Inception-v3 | \n 77.5 | \n 77.1 | \n Static post training quantization | \n
\n \n ResNet-18 | \n 69.8 | \n 69.4 | \n Static post training quantization | \n
\n \n Resnet-50 | \n 76.1 | \n 75.9 | \n Static post training quantization | \n
\n \n ResNext-101 32x8d | \n 79.3 | \n 79 | \n Static post training quantization | \n
\n \n Mobilenet-v2 | \n 71.9 | \n 71.6 | \n Quantization Aware Training | \n
\n \n Shufflenet-v2 | \n 69.4 | \n 68.4 | \n Static post training quantization | \n
\n
", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "#### **Speech and NLP Model accuracy**\n\n\n
\n \n Model | \n F1 (GLUEMRPC) Float | \n F1 (GLUEMRPC) Quantized | \n Quantization scheme | \n
\n \n BERT | \n 0.902 | \n 0.895 | \n Dynamic quantization | \n
\n
\n
", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### **Conclusion**\nTo get started on quantizing your models in PyTorch, start with [the tutorials on the PyTorch website](https://pytorch.org/tutorials/#model-optimization). If you are working with sequence data start with [dynamic quantization for LSTM](https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html), or [BERT](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html). If you are working with image data then we recommend starting with the [transfer learning with quantization](https://pytorch.org/tutorials/intermediate/quantized_transfer_learning_tutorial.html) tutorial. Then you can explore [static post training quantization](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html). If you find that the accuracy drop with post training quantization is too high, then try [quantization aware training](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "If you run into issues you can get community help by posting in at [discuss.pytorch.org](https://discuss.pytorch.org/), use the quantization category for quantization related issues.\n\n_This post is authored by Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath and Seth Weidman. Special thanks to Jianyu Huang, Lingyi Liu and Haixin Liu for producing quantization metrics included in this post._\n\n### **Further reading**:\n1. PyTorch quantization presentation at Neurips: [(https://research.fb.com/wp-content/uploads/2019/12/2.-Quantization.pptx)](https://research.fb.com/wp-content/uploads/2019/12/2.-Quantization.pptx)\n2. Quantized Tensors [(https://github.com/pytorch/pytorch/wiki/\nIntroducing-Quantized-Tensor)](https://github.com/pytorch/pytorch/wiki/Introducing-Quantized-Tensor)\n3. Quantization RFC on Github [(https://github.com/pytorch/pytorch/\nissues/18318)](https://github.com/pytorch/pytorch/issues/18318)", "metadata": {"source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Scaling PyTorch FSDP for Training Foundation Models on IBM Cloud\"\nauthor: Linsong Chu, Less Wright, Hamid Shojanazeri, Sophia Wen, Raghu Ganti, Geeta Chauhan\nfeatured-img: \"/assets/images/scaling-pytorch-fsdp-image1-IBM_scaling_FSDP_visual_new.png\"\n---\n\nLarge model training using a cloud native approach is of growing interest for many enterprises given the emergence and success of [foundation models](https://research.ibm.com/blog/what-are-foundation-models). Some AI practitioners may assume that the only way they can achieve high GPU utilization for distributed training jobs is to run them on HPC systems, such as those inter-connected with Infiniband and may not consider Ethernet connected systems. We demonstrate how the latest distributed training technique, Fully Sharded Data Parallel (FSDP) from PyTorch, successfully scales to models of size 10B+ parameters using commodity Ethernet networking in IBM Cloud.\n\n## PyTorch FSDP Scaling", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "As models get larger, the standard techniques for data parallel training work only if the GPU can hold a full replica of the model, along with its training state (optimizer, activations, etc.). However, GPU memory increases have not kept up with the model size increases and new techniques for training such models have emerged (e.g., Fully Sharded Data Parallel, [DeepSpeed](https://www.deepspeed.ai/)), which allow us to efficiently distribute the model and data over multiple GPUs during training. In this blog post, we demonstrate a path to achieve remarkable scaling of model training to 64 nodes (512 GPUs) using PyTorch native FSDP APIs as we increase model sizes to 11B.\n\n### What is Fully Sharded Data Parallel?", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "FSDP extends the distributed data parallel training (DDP) approach by sharding model parameters, gradient and optimizer states into K FSDP units, determined by using a wrapping policy. FSDP achieves large model training efficiency in terms of resources and performance by significantly reducing the memory footprint on each GPU and overlapping computation and communication.\n\nResource efficiency is achieved with memory footprint reduction by having all GPUs own a portion of each FSDP unit. To process a given FSDP unit, all GPUs share their locally owned portion via all_gather communication calls.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
{"page_content": "Performance efficiency is accomplished by overlapping all_gather communication calls for upcoming FSDP units with computation of the current FSDP unit. Once the current FSDP unit has been processed, the non-locally owned parameters are dropped, freeing memory for the upcoming FSDP units. This process achieves training efficiency by the overlap of computation and communication, while also reducing the peak memory needed by each GPU.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "In what follows, we demonstrate how FSDP allows us to keep hundreds of GPUs highly utilized throughout a distributed training job, while running over standard Ethernet networking (system description towards the end of the blog). We chose the T5 architecture for our experiments and leveraged the code from the [FSDP workshop](https://github.com/pytorch/workshops/tree/master/FSDP_Workshop). In each of our experiments, we start with a single node experiment to create a baseline and report the metric", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "seconds/iteration normalized by the batch size as well as compute the teraflops based on the [Megatron-LM paper](https://cs.stanford.edu/~matei/papers/2021/sc_megatron_lm.pdf) (see Appendix for details of teraflop computation for T5). Our experiments aim to maximize the batch size (while avoiding cudaMalloc retries) to take full advantage of overlap in computation and communications, as discussed below. Scaling is defined as the ratio of the seconds/iteration normalized by batch size for N nodes versus a", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "single node, representing how well we can utilize the additional GPUs as more nodes are added.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "Experimental Results", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "Our first set of experiments using the T5-3B configuration (mixed precision with BF16, activation checkpointing, and transformer wrapping policy) demonstrated scaling efficiency of 95% as we increased the number of GPUs from 8 to 512 (1 to 64 nodes, respectively). We achieved these results without any modifications to the existing FSDP APIs. We observed that, for this scale, over Ethernet based network, there is sufficient bandwidth to enable continuous overlap of communication and computation.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "However, when we increased the T5 model size to 11B, the scaling efficiency declined substantially to 20%. The PyTorch profiler shows that overlap of communication and computation was very limited. Further investigation into the network bandwidth usage revealed that the poor overlap is being caused by latency in the communication of individual packets and not the bandwidth required (in fact, our peak bandwidth utilization is 1/4th of that available). This led us to hypothesize that if we can increase the", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "compute time by increasing the batch size, we can better overlap communication and computation. However, given we are already at maximum GPU memory allocation, we must identify opportunities to rebalance the memory allocation to allow for increase in batch size. We identified that the model state was being allocated a lot more memory than was needed. The primary function of these reservations is to have pre-reserved memory ready to aggressively send/receive tensors during the communication periods and too", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "few buffers can result in increased wait times, whereas too many buffers result in smaller batch sizes.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "To achieve better efficiency, the PyTorch distributed team introduced a new control knob, the rate_limiter which controls how much memory is allocated for send/receive of tensors, alleviating the memory pressure and providing room for higher batch sizes. In our case, the rate_limiter could increase the batch size from 20 to 50, thus increasing compute time by 2.5x and allowing for much greater overlap of communication and computation. With this fix, we increased the scaling efficiency to >75% (at 32", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "nodes)!", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "Continued investigation into the factors limiting scaling efficiency uncovered that the rate limiter was creating a recurring pipeline bubble of GPU idle time. This was due to the rate limiter using a block and flush approach for the allocation and release of each set of memory buffers. By waiting for the entire block to complete before initiating a new all_gather, the GPU was idling at the start of each block, while waiting for the new set of all_gather parameters to arrive. This bubble was alleviated by", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "moving to a sliding window approach. Upon the completion of a single all_gather step and its computation (rather than a block of them), the memory is freed and the next all_gather is immediately issued in a much more uniform manner. This improvement eliminated the pipeline bubble and boosted the scaling efficiencies to >90% (at 32 nodes).", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\nFigure 1: Scaling of T5-XL (3B) and T5-XXL (11B) from 1 node to 64 nodes\n
\n\n\n
\n
\n\n\nFigure 2: TFLOPs/sec usage for T5-XL(3B) and T5-XXL (11B) as we increase number of nodes\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "IBM Cloud AI System and Middleware\n\nThe AI infrastructure used for this work is a large-scale AI system on IBM Cloud consisting of nearly 200 nodes, each node with 8 NVIDIA A100 80GB cards, 96 vCPUs, and 1.2TB CPU RAM. The GPU cards within a node are connected via NVLink with a card-to-card bandwidth of 600GBps. Nodes are connected by 2 x 100Gbps Ethernet links with SRIOV based TCP/IP stack, providing a usable bandwidth of 120Gbps.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "The IBM Cloud AI System has been production-ready since May of 2022 and is configured with the OpenShift container platform to run AI workloads. We also built a software stack for production AI workloads that provide end-to-end tools for training workloads. The middleware leverages Ray for pre and post processing workloads and PyTorch for training of models. We also integrate a Kubernetes native scheduler, MCAD, that manages multiple jobs with job queuing, gang scheduling, prioritization, and quota", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "management. A multi-NIC CNI discovers all available network interfaces and handles them as a single NIC pool enabling optimized use of the network interfaces in Kubernetes. Finally, CodeFlare CLI supports a single pane for observability of the full stack using a desktop CLI (e.g., GPU utilization, application metrics like loss, gradient norm).", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\nFigure 3: Foundation Model Middleware Stack\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "Conclusion and Future Work", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "In conclusion, we demonstrated how we can achieve remarkable scaling of FSDP APIs over non-InfiniBand networks. We identified the bottleneck that had limited scaling to less than 20% efficiency for 11B parameter model training. After identifying the issue, we were able to correct this with a new rate limiter control to ensure a more optimal balance of reserved memory and communication overlap relative to compute time. With this improvement, we were able to achieve 90% scaling efficiency (a 4.5x", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "improvement), at 256 GPUs and 80% at 512 GPUs for training of the 11B parameter model. In addition, the 3B parameter model scales extremely well with 95% efficiency even as we increase the number of GPUs to 512.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "This is a first in the industry to achieve such scaling efficiencies for up to 11B parameter models using Kubernetes with vanilla Ethernet and PyTorch native FSDP API\u2019s. This improvement enables users to train huge models on a Hybrid Cloud platform in a cost efficient and sustainable manner.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "We plan on continuing to investigate scaling with decoder only models and increasing the size of these models to 100B+ parameters. From a system design perspective, we are exploring capabilities such as RoCE and GDR that can improve latencies of communications over Ethernet networks.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "Acknowledgements\n\nThis blog was possible because of contributions from both PyTorch Distributed and IBM Research teams.\n\nFrom the PyTorch Distributed team, we would like to thank Less Wright, Hamid Shojanazeri, Geeta Chauhan, Shen Li, Rohan Varma, Yanli Zhao, Andrew Gu, Anjali Sridhar, Chien-Chin Huang, and Bernard Nguyen.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "From the IBM Research team, we would like to thank Linsong Chu, Sophia Wen, Lixiang (Eric) Luo, Marquita Ellis, Davis Wertheimer, Supriyo Chakraborty, Raghu Ganti, Mudhakar Srivatsa, Seetharami Seelam, Carlos Costa, Abhishek Malvankar, Diana Arroyo, Alaa Youssef, Nick Mitchell.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "Appendix", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "Teraflop computation\n\nThe T5-XXL (11B) architecture has two types of T5 blocks, one is an encoder and the second is a decoder. Following the approach of Megatron-LM, where each matrix multiplication requires 2m\u00d7k\u00d7n FLOPs, where the first matrix is of size m\u00d7k and the second is k\u00d7n. The encoder block consists of self-attention and feed forward layers, whereas the decoder block consists of self-attention, cross-attention, and feed forward layers.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "The attention (both self and cross) block consists of a QKV projection, which requires 6Bsh2 operations, an attention matrix computation requiring 2Bs2h operations, an attention over values which needs 2Bs2h computations, and the post-attention linear projection requires 2Bsh2 operations. Finally, the feed forward layer requires 15Bsh2 operations.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "The total for an encoder block is 23Bsh2+4Bs2h, whereas for a decoder block, it comes to 31Bsh2+8Bs2h. With a total of 24 encoder and 24 decoder blocks and 2 forward passes (as we discard the activations) and one backward pass (equivalent to two forward passes), the final FLOPs computation comes to be 96\u00d7(54Bsh2+ 12Bs2h) + 6BshV. Here, B is the batch size per GPU, s is sequence length, h is hidden state size, and V is vocabulary size.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "We repeat a similar computation for T5-XL (3B) architecture, which is slightly different.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Empowering PyTorch on Intel\u00ae Xeon\u00ae Scalable processors with Bfloat16\"\nauthor: Mingfei Ma (Intel), Vitaly Fedyunin (Meta), Wei Wei (Meta)\nfeatured-img: '\\assets\\images\\empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.png'\n---", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "Overview", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "Recent years, the growing complexity of AI models have been posing requirements on hardware for more and more compute capability. Reduced precision numeric format has been proposed to address this problem. Bfloat16 is a custom 16-bit floating point format for AI which consists of one sign bit, eight exponent bits, and seven mantissa bits. With the same dynamic range as float32, bfloat16 doesn\u2019t require a special handling such as loss scaling. Therefore, bfloat16 is a drop-in replacement for float32 when", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "running deep neural networks for both inference and training.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "The 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor (codenamed Cooper Lake), is the first general purpose x86 CPU with native bfloat16 support. Three new bfloat16 instructions were introduced in Intel\u00ae Advanced Vector Extensions-512 (Intel\u00ae AVX-512): VCVTNE2PS2BF16, VCVTNEPS2BF16, and VDPBF16PS. The first two instructions perform conversion from float32 to bfloat16, and the last one performs a dot product of bfloat16 pairs. Bfloat16 theoretical compute throughput is", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "doubled over float32 on Cooper Lake. On the next generation of Intel\u00ae Xeon\u00ae Scalable Processors, bfloat16 compute throughput will be further enhanced through Advanced Matrix Extensions (Intel\u00ae AMX) instruction set extension.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "Intel and Meta previously collaborated to enable bfloat16 on PyTorch, and the related work was published in an earlier [blog](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659) during launch of Cooper Lake. In that blog, we introduced the hardware advancement for native bfloat16 support and showcased a performance boost of 1.4x to 1.6x of bfloat16 over float32 from DLRM, ResNet-50 and ResNext-101-32x4d.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "In this blog, we will introduce the latest software enhancement on bfloat16 in PyTorch 1.12, which would apply to much broader scope of user scenarios and showcase even higher performance boost.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "Native Level Optimization on Bfloat16", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "On PyTorch CPU bfloat16 path, the compute intensive operators, e.g., convolution, linear and bmm, use oneDNN (oneAPI Deep Neural Network Library) to achieve optimal performance on Intel CPUs with AVX512_BF16 or AMX support. The other operators, such as tensor operators and neural network operators, are optimized at PyTorch native level. We have enlarged bfloat16 kernel level optimizations to majority of operators on dense tensors, both inference and training applicable (sparse tensor bfloat16 support will", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "be covered in future work), specifically:", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "- **Bfloat16 vectorization**: Bfloat16 is stored as unsigned 16-bit integer, which requires it to be casted to float32 for arithmetic operations such as add, mul, etc. Specifically, each bfloat16 vector will be converted to two float32 vectors, processed accordingly and then converted back. While for non-arithmetic operations such as cat, copy, etc., it is a straight memory copy and no data type conversion will be involved.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "- **Bfloat16 reduction**: Reduction on bfloat16 data uses float32 as accumulation type to guarantee numerical stability, e.g., sum, BatchNorm2d, MaxPool2d, etc.\n- **Channels Last optimization**: For vision models, Channels Last is the preferable memory format over Channels First from performance perspective. We have implemented fully optimized CPU kernels for all the commonly used CV modules on channels last memory format, taking care of both float32 and bfloat16.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "Run Bfloat16 with Auto Mixed Precision\n\nTo run model on bfloat16, typically user can either explicitly convert the data and model to bfloat16, for example:\n\n```console\n# with explicit conversion\ninput = input.to(dtype=torch.bfloat16)\nmodel = model.to(dtype=torch.bfloat16)", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "or utilize torch.amp (Automatic Mixed Precision) package. The autocast instance serves as context managers or decorators that allow regions of your script to run in mixed precision, for example:\n\n```console\n# with AMP\nwith torch.autocast(device_type=\"cpu\", dtype=torch.bfloat16):\n output = model(input)", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "Generally, the explicit conversion approach and AMP approach have similar performance. Even though, we recommend run bfloat16 models with AMP, because:\n\n- **Better user experience with automatic fallback**: If your script includes operators that don\u2019t have bfloat16 support, autocast will implicitly convert them back to float32 while the explicit converted model will give a runtime error.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "- **Mixed data type for activation and parameters**: Unlike the explicit conversion which converts all the model parameters to bfloat16, AMP mode will run in mixed data type. To be specific, input/output will be kept in bfloat16 while parameters, e.g., weight/bias, will be kept in float32. The mixed data type of activation and parameters will help improve performance while maintaining the accuracy.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "Performance Gains\n\nWe benchmarked inference performance of TorchVision models on Intel\u00ae Xeon\u00ae Platinum 8380H CPU @ 2.90GHz (codenamed Cooper Lake), single instance per socket (batch size = 2 x number of physical cores). Results show that bfloat16 has 1.4x to 2.2x performance gain over float32.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "The performance boost of bfloat16 over float32 primarily comes from 3 aspects:", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "- The compute intensive operators take advantage of the new bfloat16 native instruction VDPBF16PS which doubles the hardware compute throughput.\n- Bfloat16 have only half the memory footprint of float32, so theoretically the memory bandwidth intensive operators will be twice faster.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "- On Channels Last, we intentionally keep the same parallelization scheme for all the memory format aware operators (can\u2019t do this on Channels First though), which increases the data locality when passing each layer\u2019s output to the next. Basically, it keeps the data closer to CPU cores while data would reside in cache anyway. And bfloat16 will have a higher cache hit rate compared with float32 in such scenarios due to smaller memory footprint.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "Conclusion & Future Work", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "In this blog, we introduced recent software optimizations on bfloat16 introduced in PyTorch 1.12. Results on the 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor show that bfloat16 has 1.4x to 2.2x performance gain over float32 on the TorchVision models. Further improvement is expected on the next generation of Intel\u00ae Xeon\u00ae Scalable Processors with AMX instruction support. Though the performance number for this blog is collected with TorchVision models, the benefit is", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "broad across all topologies. And we will continue to extend the bfloat16 optimization effort to a broader scope in the future!", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "Acknowledgement\n\nThe results presented in this blog is a joint effort of Meta and Intel PyTorch team. Special thanks to Vitaly Fedyunin and Wei Wei from Meta who spent precious time and gave substantial assistance! Together we made one more step on the path of improving the PyTorch CPU eco system.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "Reference", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "- [The bfloat16 numerical format](https://cloud.google.com/tpu/docs/bfloat16?hl=en)\n- [https://pytorch.org/docs/master/amp.html#torch.autocast](https://pytorch.org/docs/master/amp.html#torch.autocast)\n- [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel\u00ae Xeon\u00ae Processors and Intel\u00ae Deep Learning Boost\u2019s new BFloat16 capability](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659)", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"Announcing PyTorch Conference 2022\"\nauthor:\nfeatured-img: \"/assets/images/pytorch-conference-2022.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "We are excited to announce that the PyTorch Conference returns in-person as a satellite event to", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "[NeurlPS](https://l.workplace.com/l.php?u=https%3A%2F%2Fnips.cc%2F&h=AT3cdRwSEhyuNXpH2ptWjk-KxMxcceaYeTfflT6PEezDQ_zeUxRv1gjX7GhTQBgvZxFAR0wlSBwuhpipdMjUknMnhY5oJ5C4HjLNO40-12UnoeYALriwrvdxGfgigo8KYlWu_gRIQwlO-2r0wTnNft0whoSaOdVAxw&__tn__=-UK-R&c[0]=AT3z6QRLu8Uw48lKQ_P6FFq7ncHfjsfI16OGZvWO9kALatCY4sZcMjNzR7a4OiOG25RKVHpDX0TGutZHyM_R8Kl2s71Y3DEbq5QccmUVaSzCbcMUSc5Ms2zXHoeGxUlw1XirihAydPsX4Y1OmF6GRjqH8YFTNTFQRN3I8j2SFhR8LEUDxDmfnZ8Q7c2hXi0HeGc) (Neural Information Processing Systems) in New Orleans on Dec.", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "2nd.", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "We changed the name from PyTorch Developer Day to PyTorch Conference to signify the turning of a new chapter as we look to the future of PyTorch, encompassing the entire PyTorch Community. This conference will bring together leading researchers, academics and developers from the Machine Learning (ML) and Deep Learning (DL) communities to join a multiple set of talks and a poster session; covering new software releases on [PyTorch](https://pytorch.org/), use cases in academia and industry, as well as ML/DL", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "development and production trends.", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "EVENT OVERVIEW\n\nWhen: Dec 2nd, 2022 (In-Person and Virtual)\n\nWhere: New Orleans, Louisiana (USA) | *Virtual option as well*", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "SCHEDULE\n\nAll times in Central Standard.\n\n8:00-9:00 am Registration/Check in\n\n9:00-11:20 am Keynote & Technical Talks\n\n11:30-1:00 pm Lunch\n\n1:00-3:00 pm Poster Session & Breakouts\n\n3:00-4:00 pm Community/Partner Talks\n\n4:00-5:00 pm Panel Discussion\n\nAgenda subject to change.", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "All talks will be livestreamed and available to the public. The in-person event will be by invitation only as space is limited. If you\u2019d like to apply to attend in person, please submit all requests [here](https://pytorchconference22.splashthat.com/).", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "LINKS\n\n- [Submit Content for Consideration by Sept. 30th](https://docs.google.com/forms/d/121ptOuhqhmcPev9g5Zt2Ffl-NtB_oeyFk5CWjumUVLQ/edit)\n- [Livestream event page](https://www.facebook.com/events/1562940847455759)\n- [Apply for an invitation to the in-person event](https://pytorchconference22.splashthat.com/)", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'New PyTorch library releases including TorchVision Mobile, TorchAudio I/O, and more'\nauthor: Team PyTorch \n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Today, we are announcing updates to a number of PyTorch libraries, alongside the [PyTorch 1.8 release](https://pytorch.org/blog/pytorch-1.8-released). The updates include new releases for the domain libraries including TorchVision, TorchText and TorchAudio as well as new version of TorchCSPRNG. These releases include a number of new features and improvements and, along with the PyTorch 1.8 release, provide a broad set of updates for the PyTorch community to build on and leverage.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Some highlights include:\n* **TorchVision** - Added support for PyTorch Mobile including [Detectron2Go](https://ai.facebook.com/blog/d2go-brings-detectron2-to-mobile) (D2Go), auto-augmentation of data during training, on the fly type conversion, and [AMP autocasting](https://pytorch.org/docs/stable/amp.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "* **TorchAudio** - Major improvements to I/O, including defaulting to sox_io backend and file-like object support. Added Kaldi Pitch feature and support for CMake based build allowing TorchAudio to better support no-Python environments.\n* **TorchText** - Updated the dataset loading API to be compatible with standard PyTorch data loading utilities.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "* **TorchCSPRNG** - Support for cryptographically secure pseudorandom number generators for PyTorch is now stable with new APIs for AES128 ECB/CTR and CUDA support on Windows.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Please note that, starting in PyTorch 1.6, features are classified as Stable, Beta, and Prototype. Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via compiler flag. You can see the detailed announcement [here](https://pytorch.org/blog/pytorch-feature-classification-changes/).\n\n\n# TorchVision 0.9.0", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] TorchVision Mobile: Operators, Android Binaries, and Tutorial", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We are excited to announce the first on-device support and binaries for a PyTorch domain library. We have seen significant appetite in both research and industry for on-device vision support to allow low latency, privacy friendly, and resource efficient mobile vision experiences. You can follow this [new tutorial](https://github.com/pytorch/android-demo-app/tree/master/D2Go) to build your own Android object detection app using TorchVision operators, D2Go, or your own custom operators and model.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] New Mobile models for Classification, Object Detection and Semantic Segmentation\nWe have added support for the MobileNetV3 architecture and provided pre-trained weights for Classification, Object Detection and Segmentation. It is easy to get up and running with these models, just import and load them as you would any ```torchvision``` model:\n```python\nimport torch\nimport torchvision", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "# Classification\nx = torch.rand(1, 3, 224, 224)\nm_classifier = torchvision.models.mobilenet_v3_large(pretrained=True)\nm_classifier.eval()\npredictions = m_classifier(x)\n\n# Quantized Classification\nx = torch.rand(1, 3, 224, 224)\nm_classifier = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)\nm_classifier.eval()\npredictions = m_classifier(x)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "# Object Detection: Highly Accurate High Resolution Mobile Model\nx = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]\nm_detector = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)\nm_detector.eval()\npredictions = m_detector(x)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "# Semantic Segmentation: Highly Accurate Mobile Model\nx = torch.rand(1, 3, 520, 520)\nm_segmenter = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)\nm_segmenter.eval()\npredictions = m_segmenter(x)\n```\nThese models are highly competitive with TorchVision\u2019s existing models on resource efficiency, speed, and accuracy. See our [release notes](https://github.com/pytorch/vision/releases) for detailed performance metrics.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] AutoAugment", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "[AutoAugment](https://arxiv.org/pdf/1805.09501.pdf) is a common Data Augmentation technique that can increase the accuracy of Scene Classification models. Though the data augmentation policies are directly linked to their trained dataset, empirical studies show that ImageNet policies provide significant improvements when applied to other datasets. We\u2019ve implemented 3 policies learned on the following datasets: ImageNet, CIFA10 and SVHN. These can be used standalone or mixed-and-matched with existing", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "transforms:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfrom torchvision import transforms", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "t = transforms.AutoAugment()\ntransformed = t(image)\n\n\ntransform=transforms.Compose([\n transforms.Resize(256),\n transforms.AutoAugment(),\n transforms.ToTensor()])\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Other New Features for TorchVision\n* [Stable] All read and decode methods in the io.image package now support:\n * Palette, Grayscale Alpha and RBG Alpha image types during PNG decoding\n * On-the-fly conversion of image from one type to the other during read\n* [Stable] WiderFace dataset\n* [Stable] Improved FasterRCNN speed and accuracy by introducing a score threshold on RPN\n* [Stable] Modulation input for DeformConv2D\n* [Stable] Option to write audio to a video file", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "* [Stable] Utility to draw bounding boxes\n* [Beta] Autocast support in all Operators\nFind the full TorchVision release notes [here](https://github.com/pytorch/vision/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "# TorchAudio 0.8.0", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "I/O Improvements\nWe have continued our work from the [previous release](https://github.com/pytorch/audio/releases/tag/v0.7.0) to improve TorchAudio\u2019s I/O support, including:\n* [Stable] Changing the default backend to \u201csox_io\u201d (for Linux/macOS), and updating the \u201csoundfile\u201d backend\u2019s interface to align with that of \u201csox_io\u201d. The legacy backend and interface are still accessible, though it is strongly discouraged to use them.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "* [Stable] File-like object support in both \"sox_io\" backend, \u201csoundfile\u201d backend and sox_effects.\n* [Stable] New options to change the format, encoding, and bits_per_sample when saving.\n* [Stable] Added GSM, HTK, AMB, AMR-NB and AMR-WB format support to the \u201csox_io\u201d backend.\n* [Beta] A new ```functional.apply_codec``` function which can degrade audio data by applying audio codecs supported by \u201csox_io\u201d backend in an in-memory fashion.\nHere are some examples of features landed in this release:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "```python\n# Load audio over HTTP\nwith requests.get(URL, stream=True) as response:\n waveform, sample_rate = torchaudio.load(response.raw)\n \n# Saving to Bytes buffer as 32-bit floating-point PCM\nbuffer_ = io.BytesIO()\ntorchaudio.save(\n buffer_, waveform, sample_rate,\n format=\"wav\", encoding=\"PCM_S\", bits_per_sample=16)\n \n# Apply effects while loading audio from S3\nclient = boto3.client('s3')\nresponse = client.get_object(Bucket=S3_BUCKET, Key=S3_KEY)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "waveform, sample_rate = torchaudio.sox_effects.apply_effect_file(\n response['Body'],\n [[\"lowpass\", \"-1\", \"300\"], [\"rate\", \"8000\"]])\n \n# Apply GSM codec to Tensor\nencoded = torchaudio.functional.apply_codec(\n waveform, sample_rate, format=\"gsm\")", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Check out the revamped audio preprocessing tutorial, [Audio Manipulation with TorchAudio](https://pytorch.org/tutorials/beginner/audio_preprocessing_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] Switch to CMake-based build\nIn the previous version of TorchAudio, it was utilizing CMake to build third party dependencies. Starting in 0.8.0, TorchaAudio uses CMake to build its C++ extension. This will open the door to integrate TorchAudio in non-Python environments (such as C++ applications and mobile). We will continue working on adding example applications and mobile integrations.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Improved and New Audio Transforms", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We have added two widely requested operators in this release: the SpectralCentroid transform and the Kaldi Pitch feature extraction (detailed in [\"A pitch extraction algorithm tuned for automatic speech recognition\"](https://ieeexplore.ieee.org/document/6854049)). We\u2019ve also exposed a normalization method to Mel transforms, and additional STFT arguments to Spectrogram. We would like to ask our community to continue to [raise feature", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "requests](https://github.com/pytorch/audio/issues/new?assignees=&labels=&template=feature-request.md) for core audio processing features like these!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Community Contributions", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "We had more contributions from the open source community in this release than ever before, including several completely new features. We would like to extend our sincere thanks to the community. Please check out the newly added [CONTRIBUTING.md](https://github.com/pytorch/audio/blob/master/CONTRIBUTING.md) for ways to contribute code, and remember that reporting bugs and requesting features are just as valuable. We will continue posting well-scoped work items as issues labeled \u201chelp-wanted\u201d and", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "\u201ccontributions-welcome\u201d for anyone who would like to contribute code, and are happy to coach new contributors through the contribution process.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Find the full TorchAudio release notes [here](https://github.com/pytorch/audio/releases).\n\n# TorchText 0.9.0", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Dataset API Updates", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "In this release, we are updating TorchText\u2019s dataset API to be compatible with PyTorch data utilities, such as DataLoader, and are deprecating TorchText\u2019s custom data abstractions such as ```Field```. The updated datasets are simple string-by-string iterators over the data. For guidance about migrating from the legacy abstractions to use modern PyTorch data utilities, please refer to our [migration guide](https://github.com/pytorch/text/blob/master/examples/legacy_tutorial/migration_tutorial.ipynb).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "The text datasets listed below have been updated as part of this work. For examples of how to use these datasets, please refer to our [end-to-end text classification tutorial](https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html).\n* **Language modeling:** WikiText2, WikiText103, PennTreebank, EnWik9\n* **Text classification:** AG_NEWS, SogouNews, DBpedia, YelpReviewPolarity, YelpReviewFull, YahooAnswers, AmazonReviewPolarity, AmazonReviewFull, IMDB", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "* **Sequence tagging:** UDPOS, CoNLL2000Chunking\n* **Translation:** IWSLT2016, IWSLT2017\n* **Question answer:** SQuAD1, SQuAD2", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Find the full TorchText release notes [here](https://github.com/pytorch/text/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "# [Stable] TorchCSPRNG 0.2.0\nWe [released TorchCSPRNG in August 2020](https://pytorch.org/blog/torchcsprng-release-blog/), a PyTorch C++/CUDA extension that provides cryptographically secure pseudorandom number generators for PyTorch. Today, we are releasing the 0.2.0 version and designating the library as stable. This release includes a new API for encrypt/decrypt with AES128 ECB/CTR as well as CUDA 11 and Windows CUDA support.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
-{"page_content": "Find the full TorchCSPRNG release notes [here](https://github.com/pytorch/csprng/releases/).\n\n\n\n\n\n\n\nThanks for reading, and if you are excited about these updates and want to participate in the future of PyTorch, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch).\n\nCheers!\n\n***Team PyTorch***", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "In what follows, we demonstrate how FSDP allows us to keep hundreds of GPUs highly utilized throughout a distributed training job, while running over standard Ethernet networking (system description towards the end of the blog). We chose the T5 architecture for our experiments and leveraged the code from the [FSDP workshop](https://github.com/pytorch/workshops/tree/master/FSDP_Workshop). In each of our experiments, we start with a single node experiment to create a baseline and report the metric seconds/iteration normalized by the batch size as well as compute the teraflops based on the [Megatron-LM paper](https://cs.stanford.edu/~matei/papers/2021/sc_megatron_lm.pdf) (see Appendix for details of teraflop computation for T5). Our experiments aim to maximize the batch size (while avoiding cudaMalloc retries) to take full advantage of overlap in computation and communications, as discussed below. Scaling is defined as the ratio of the seconds/iteration normalized by batch size for N nodes versus a single node, representing how well we can utilize the additional GPUs as more nodes are added.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "### Experimental Results\n\nOur first set of experiments using the T5-3B configuration (mixed precision with BF16, activation checkpointing, and transformer wrapping policy) demonstrated scaling efficiency of 95% as we increased the number of GPUs from 8 to 512 (1 to 64 nodes, respectively). We achieved these results without any modifications to the existing FSDP APIs. We observed that, for this scale, over Ethernet based network, there is sufficient bandwidth to enable continuous overlap of communication and computation.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "However, when we increased the T5 model size to 11B, the scaling efficiency declined substantially to 20%. The PyTorch profiler shows that overlap of communication and computation was very limited. Further investigation into the network bandwidth usage revealed that the poor overlap is being caused by latency in the communication of individual packets and not the bandwidth required (in fact, our peak bandwidth utilization is 1/4th of that available). This led us to hypothesize that if we can increase the compute time by increasing the batch size, we can better overlap communication and computation. However, given we are already at maximum GPU memory allocation, we must identify opportunities to rebalance the memory allocation to allow for increase in batch size. We identified that the model state was being allocated a lot more memory than was needed. The primary function of these reservations is to have pre-reserved memory ready to aggressively send/receive tensors during the communication periods and too few buffers can result in increased wait times, whereas too many buffers result in smaller batch sizes.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "To achieve better efficiency, the PyTorch distributed team introduced a new control knob, the rate_limiter which controls how much memory is allocated for send/receive of tensors, alleviating the memory pressure and providing room for higher batch sizes. In our case, the rate_limiter could increase the batch size from 20 to 50, thus increasing compute time by 2.5x and allowing for much greater overlap of communication and computation. With this fix, we increased the scaling efficiency to >75% (at 32 nodes)!", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "Continued investigation into the factors limiting scaling efficiency uncovered that the rate limiter was creating a recurring pipeline bubble of GPU idle time. This was due to the rate limiter using a block and flush approach for the allocation and release of each set of memory buffers. By waiting for the entire block to complete before initiating a new all_gather, the GPU was idling at the start of each block, while waiting for the new set of all_gather parameters to arrive. This bubble was alleviated by moving to a sliding window approach. Upon the completion of a single all_gather step and its computation (rather than a block of them), the memory is freed and the next all_gather is immediately issued in a much more uniform manner. This improvement eliminated the pipeline bubble and boosted the scaling efficiencies to >90% (at 32 nodes).\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "\nFigure 1: Scaling of T5-XL (3B) and T5-XXL (11B) from 1 node to 64 nodes\n
\n\n\n
\n
\n\n\nFigure 2: TFLOPs/sec usage for T5-XL(3B) and T5-XXL (11B) as we increase number of nodes\n
\n\n## IBM Cloud AI System and Middleware\n\nThe AI infrastructure used for this work is a large-scale AI system on IBM Cloud consisting of nearly 200 nodes, each node with 8 NVIDIA A100 80GB cards, 96 vCPUs, and 1.2TB CPU RAM. The GPU cards within a node are connected via NVLink with a card-to-card bandwidth of 600GBps. Nodes are connected by 2 x 100Gbps Ethernet links with SRIOV based TCP/IP stack, providing a usable bandwidth of 120Gbps.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "The IBM Cloud AI System has been production-ready since May of 2022 and is configured with the OpenShift container platform to run AI workloads. We also built a software stack for production AI workloads that provide end-to-end tools for training workloads. The middleware leverages Ray for pre and post processing workloads and PyTorch for training of models. We also integrate a Kubernetes native scheduler, MCAD, that manages multiple jobs with job queuing, gang scheduling, prioritization, and quota management. A multi-NIC CNI discovers all available network interfaces and handles them as a single NIC pool enabling optimized use of the network interfaces in Kubernetes. Finally, CodeFlare CLI supports a single pane for observability of the full stack using a desktop CLI (e.g., GPU utilization, application metrics like loss, gradient norm).\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "\nFigure 3: Foundation Model Middleware Stack\n
\n\n### Conclusion and Future Work\n\nIn conclusion, we demonstrated how we can achieve remarkable scaling of FSDP APIs over non-InfiniBand networks. We identified the bottleneck that had limited scaling to less than 20% efficiency for 11B parameter model training. After identifying the issue, we were able to correct this with a new rate limiter control to ensure a more optimal balance of reserved memory and communication overlap relative to compute time. With this improvement, we were able to achieve 90% scaling efficiency (a 4.5x improvement), at 256 GPUs and 80% at 512 GPUs for training of the 11B parameter model. In addition, the 3B parameter model scales extremely well with 95% efficiency even as we increase the number of GPUs to 512.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "This is a first in the industry to achieve such scaling efficiencies for up to 11B parameter models using Kubernetes with vanilla Ethernet and PyTorch native FSDP API\u2019s. This improvement enables users to train huge models on a Hybrid Cloud platform in a cost efficient and sustainable manner.\n\nWe plan on continuing to investigate scaling with decoder only models and increasing the size of these models to 100B+ parameters. From a system design perspective, we are exploring capabilities such as RoCE and GDR that can improve latencies of communications over Ethernet networks.\n\n## Acknowledgements\n\nThis blog was possible because of contributions from both PyTorch Distributed and IBM Research teams.\n\nFrom the PyTorch Distributed team, we would like to thank Less Wright, Hamid Shojanazeri, Geeta Chauhan, Shen Li, Rohan Varma, Yanli Zhao, Andrew Gu, Anjali Sridhar, Chien-Chin Huang, and Bernard Nguyen.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "From the IBM Research team, we would like to thank Linsong Chu, Sophia Wen, Lixiang (Eric) Luo, Marquita Ellis, Davis Wertheimer, Supriyo Chakraborty, Raghu Ganti, Mudhakar Srivatsa, Seetharami Seelam, Carlos Costa, Abhishek Malvankar, Diana Arroyo, Alaa Youssef, Nick Mitchell.\n\n## Appendix\n\n#### Teraflop computation\n\nThe T5-XXL (11B) architecture has two types of T5 blocks, one is an encoder and the second is a decoder. Following the approach of Megatron-LM, where each matrix multiplication requires 2m\u00d7k\u00d7n FLOPs, where the first matrix is of size m\u00d7k and the second is k\u00d7n. The encoder block consists of self-attention and feed forward layers, whereas the decoder block consists of self-attention, cross-attention, and feed forward layers.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "The attention (both self and cross) block consists of a QKV projection, which requires 6Bsh2 operations, an attention matrix computation requiring 2Bs2h operations, an attention over values which needs 2Bs2h computations, and the post-attention linear projection requires 2Bsh2 operations. Finally, the feed forward layer requires 15Bsh2 operations. \n\nThe total for an encoder block is 23Bsh2+4Bs2h, whereas for a decoder block, it comes to 31Bsh2+8Bs2h. With a total of 24 encoder and 24 decoder blocks and 2 forward passes (as we discard the activations) and one backward pass (equivalent to two forward passes), the final FLOPs computation comes to be 96\u00d7(54Bsh2+ 12Bs2h) + 6BshV. Here, B is the batch size per GPU, s is sequence length, h is hidden state size, and V is vocabulary size. \nWe repeat a similar computation for T5-XL (3B) architecture, which is slightly different.", "metadata": {"source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Empowering PyTorch on Intel\u00ae Xeon\u00ae Scalable processors with Bfloat16\"\nauthor: Mingfei Ma (Intel), Vitaly Fedyunin (Meta), Wei Wei (Meta)\nfeatured-img: '\\assets\\images\\empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.png'\n---\n\n## Overview\n\nRecent years, the growing complexity of AI models have been posing requirements on hardware for more and more compute capability. Reduced precision numeric format has been proposed to address this problem. Bfloat16 is a custom 16-bit floating point format for AI which consists of one sign bit, eight exponent bits, and seven mantissa bits. With the same dynamic range as float32, bfloat16 doesn\u2019t require a special handling such as loss scaling. Therefore, bfloat16 is a drop-in replacement for float32 when running deep neural networks for both inference and training.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
+{"page_content": "The 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor (codenamed Cooper Lake), is the first general purpose x86 CPU with native bfloat16 support. Three new bfloat16 instructions were introduced in Intel\u00ae Advanced Vector Extensions-512 (Intel\u00ae AVX-512): VCVTNE2PS2BF16, VCVTNEPS2BF16, and VDPBF16PS. The first two instructions perform conversion from float32 to bfloat16, and the last one performs a dot product of bfloat16 pairs. Bfloat16 theoretical compute throughput is doubled over float32 on Cooper Lake. On the next generation of Intel\u00ae Xeon\u00ae Scalable Processors, bfloat16 compute throughput will be further enhanced through Advanced Matrix Extensions (Intel\u00ae AMX) instruction set extension.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
+{"page_content": "Intel and Meta previously collaborated to enable bfloat16 on PyTorch, and the related work was published in an earlier [blog](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659) during launch of Cooper Lake. In that blog, we introduced the hardware advancement for native bfloat16 support and showcased a performance boost of 1.4x to 1.6x of bfloat16 over float32 from DLRM, ResNet-50 and ResNext-101-32x4d.\n\nIn this blog, we will introduce the latest software enhancement on bfloat16 in PyTorch 1.12, which would apply to much broader scope of user scenarios and showcase even higher performance boost.\n\n## Native Level Optimization on Bfloat16", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
+{"page_content": "On PyTorch CPU bfloat16 path, the compute intensive operators, e.g., convolution, linear and bmm, use oneDNN (oneAPI Deep Neural Network Library) to achieve optimal performance on Intel CPUs with AVX512_BF16 or AMX support. The other operators, such as tensor operators and neural network operators, are optimized at PyTorch native level. We have enlarged bfloat16 kernel level optimizations to majority of operators on dense tensors, both inference and training applicable (sparse tensor bfloat16 support will be covered in future work), specifically:", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
+{"page_content": "- **Bfloat16 vectorization**: Bfloat16 is stored as unsigned 16-bit integer, which requires it to be casted to float32 for arithmetic operations such as add, mul, etc. Specifically, each bfloat16 vector will be converted to two float32 vectors, processed accordingly and then converted back. While for non-arithmetic operations such as cat, copy, etc., it is a straight memory copy and no data type conversion will be involved.\n- **Bfloat16 reduction**: Reduction on bfloat16 data uses float32 as accumulation type to guarantee numerical stability, e.g., sum, BatchNorm2d, MaxPool2d, etc.\n- **Channels Last optimization**: For vision models, Channels Last is the preferable memory format over Channels First from performance perspective. We have implemented fully optimized CPU kernels for all the commonly used CV modules on channels last memory format, taking care of both float32 and bfloat16.\n\n## Run Bfloat16 with Auto Mixed Precision", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
+{"page_content": "To run model on bfloat16, typically user can either explicitly convert the data and model to bfloat16, for example:\n\n```console\n# with explicit conversion\ninput = input.to(dtype=torch.bfloat16)\nmodel = model.to(dtype=torch.bfloat16)\n```\n\nor utilize torch.amp (Automatic Mixed Precision) package. The autocast instance serves as context managers or decorators that allow regions of your script to run in mixed precision, for example:\n\n```console\n# with AMP\nwith torch.autocast(device_type=\"cpu\", dtype=torch.bfloat16):\n output = model(input)\n```\n\nGenerally, the explicit conversion approach and AMP approach have similar performance. Even though, we recommend run bfloat16 models with AMP, because:\n\n- **Better user experience with automatic fallback**: If your script includes operators that don\u2019t have bfloat16 support, autocast will implicitly convert them back to float32 while the explicit converted model will give a runtime error.", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
+{"page_content": "- **Mixed data type for activation and parameters**: Unlike the explicit conversion which converts all the model parameters to bfloat16, AMP mode will run in mixed data type. To be specific, input/output will be kept in bfloat16 while parameters, e.g., weight/bias, will be kept in float32. The mixed data type of activation and parameters will help improve performance while maintaining the accuracy.\n\n## Performance Gains\n\nWe benchmarked inference performance of TorchVision models on Intel\u00ae Xeon\u00ae Platinum 8380H CPU @ 2.90GHz (codenamed Cooper Lake), single instance per socket (batch size = 2 x number of physical cores). Results show that bfloat16 has 1.4x to 2.2x performance gain over float32.\n\n\n
\n
\n\n## The performance boost of bfloat16 over float32 primarily comes from 3 aspects:", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
+{"page_content": "- The compute intensive operators take advantage of the new bfloat16 native instruction VDPBF16PS which doubles the hardware compute throughput.\n- Bfloat16 have only half the memory footprint of float32, so theoretically the memory bandwidth intensive operators will be twice faster.\n- On Channels Last, we intentionally keep the same parallelization scheme for all the memory format aware operators (can\u2019t do this on Channels First though), which increases the data locality when passing each layer\u2019s output to the next. Basically, it keeps the data closer to CPU cores while data would reside in cache anyway. And bfloat16 will have a higher cache hit rate compared with float32 in such scenarios due to smaller memory footprint.\n\n## Conclusion & Future Work", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
+{"page_content": "In this blog, we introduced recent software optimizations on bfloat16 introduced in PyTorch 1.12. Results on the 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor show that bfloat16 has 1.4x to 2.2x performance gain over float32 on the TorchVision models. Further improvement is expected on the next generation of Intel\u00ae Xeon\u00ae Scalable Processors with AMX instruction support. Though the performance number for this blog is collected with TorchVision models, the benefit is broad across all topologies. And we will continue to extend the bfloat16 optimization effort to a broader scope in the future!\n\n## Acknowledgement\n\nThe results presented in this blog is a joint effort of Meta and Intel PyTorch team. Special thanks to Vitaly Fedyunin and Wei Wei from Meta who spent precious time and gave substantial assistance! Together we made one more step on the path of improving the PyTorch CPU eco system.\n\n## Reference", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
+{"page_content": "## Reference\n\n- [The bfloat16 numerical format](https://cloud.google.com/tpu/docs/bfloat16?hl=en)\n- [https://pytorch.org/docs/master/amp.html#torch.autocast](https://pytorch.org/docs/master/amp.html#torch.autocast)\n- [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel\u00ae Xeon\u00ae Processors and Intel\u00ae Deep Learning Boost\u2019s new BFloat16 capability](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659)", "metadata": {"source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"Announcing PyTorch Conference 2022\"\nauthor:\nfeatured-img: \"/assets/images/pytorch-conference-2022.png\"\n---\n\nWe are excited to announce that the PyTorch Conference returns in-person as a satellite event to [NeurlPS](https://l.workplace.com/l.php?u=https%3A%2F%2Fnips.cc%2F&h=AT3cdRwSEhyuNXpH2ptWjk-KxMxcceaYeTfflT6PEezDQ_zeUxRv1gjX7GhTQBgvZxFAR0wlSBwuhpipdMjUknMnhY5oJ5C4HjLNO40-12UnoeYALriwrvdxGfgigo8KYlWu_gRIQwlO-2r0wTnNft0whoSaOdVAxw&__tn__=-UK-R&c[0]=AT3z6QRLu8Uw48lKQ_P6FFq7ncHfjsfI16OGZvWO9kALatCY4sZcMjNzR7a4OiOG25RKVHpDX0TGutZHyM_R8Kl2s71Y3DEbq5QccmUVaSzCbcMUSc5Ms2zXHoeGxUlw1XirihAydPsX4Y1OmF6GRjqH8YFTNTFQRN3I8j2SFhR8LEUDxDmfnZ8Q7c2hXi0HeGc) (Neural Information Processing Systems) in New Orleans on Dec. 2nd.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
+{"page_content": "We changed the name from PyTorch Developer Day to PyTorch Conference to signify the turning of a new chapter as we look to the future of PyTorch, encompassing the entire PyTorch Community. This conference will bring together leading researchers, academics and developers from the Machine Learning (ML) and Deep Learning (DL) communities to join a multiple set of talks and a poster session; covering new software releases on [PyTorch](https://pytorch.org/), use cases in academia and industry, as well as ML/DL development and production trends.\n\n### EVENT OVERVIEW\n\nWhen: Dec 2nd, 2022 (In-Person and Virtual)\n\nWhere: New Orleans, Louisiana (USA) | *Virtual option as well*\n\n### SCHEDULE\n\nAll times in Central Standard.\n\n8:00-9:00 am Registration/Check in\n\n9:00-11:20 am Keynote & Technical Talks\n\n11:30-1:00 pm Lunch\n\n1:00-3:00 pm Poster Session & Breakouts\n\n3:00-4:00 pm Community/Partner Talks\n\n4:00-5:00 pm Panel Discussion", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
+{"page_content": "Agenda subject to change.\n\nAll talks will be livestreamed and available to the public. The in-person event will be by invitation only as space is limited. If you\u2019d like to apply to attend in person, please submit all requests [here](https://pytorchconference22.splashthat.com/).\n\n### LINKS\n\n- [Submit Content for Consideration by Sept. 30th](https://docs.google.com/forms/d/121ptOuhqhmcPev9g5Zt2Ffl-NtB_oeyFk5CWjumUVLQ/edit)\n- [Livestream event page](https://www.facebook.com/events/1562940847455759)\n- [Apply for an invitation to the in-person event](https://pytorchconference22.splashthat.com/)", "metadata": {"source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'New PyTorch library releases including TorchVision Mobile, TorchAudio I/O, and more'\nauthor: Team PyTorch \n---\n\nToday, we are announcing updates to a number of PyTorch libraries, alongside the [PyTorch 1.8 release](https://pytorch.org/blog/pytorch-1.8-released). The updates include new releases for the domain libraries including TorchVision, TorchText and TorchAudio as well as new version of TorchCSPRNG. These releases include a number of new features and improvements and, along with the PyTorch 1.8 release, provide a broad set of updates for the PyTorch community to build on and leverage.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "Some highlights include:\n* **TorchVision** - Added support for PyTorch Mobile including [Detectron2Go](https://ai.facebook.com/blog/d2go-brings-detectron2-to-mobile) (D2Go), auto-augmentation of data during training, on the fly type conversion, and [AMP autocasting](https://pytorch.org/docs/stable/amp.html). \n* **TorchAudio** - Major improvements to I/O, including defaulting to sox_io backend and file-like object support. Added Kaldi Pitch feature and support for CMake based build allowing TorchAudio to better support no-Python environments.\n* **TorchText** - Updated the dataset loading API to be compatible with standard PyTorch data loading utilities.\n* **TorchCSPRNG** - Support for cryptographically secure pseudorandom number generators for PyTorch is now stable with new APIs for AES128 ECB/CTR and CUDA support on Windows.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "Please note that, starting in PyTorch 1.6, features are classified as Stable, Beta, and Prototype. Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via compiler flag. You can see the detailed announcement [here](https://pytorch.org/blog/pytorch-feature-classification-changes/).\n\n\n# TorchVision 0.9.0\n### [Stable] TorchVision Mobile: Operators, Android Binaries, and Tutorial\nWe are excited to announce the first on-device support and binaries for a PyTorch domain library. We have seen significant appetite in both research and industry for on-device vision support to allow low latency, privacy friendly, and resource efficient mobile vision experiences. You can follow this [new tutorial](https://github.com/pytorch/android-demo-app/tree/master/D2Go) to build your own Android object detection app using TorchVision operators, D2Go, or your own custom operators and model.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "\n

\n
\n\n### [Stable] New Mobile models for Classification, Object Detection and Semantic Segmentation\nWe have added support for the MobileNetV3 architecture and provided pre-trained weights for Classification, Object Detection and Segmentation. It is easy to get up and running with these models, just import and load them as you would any ```torchvision``` model:\n```python\nimport torch\nimport torchvision\n\n# Classification\nx = torch.rand(1, 3, 224, 224)\nm_classifier = torchvision.models.mobilenet_v3_large(pretrained=True)\nm_classifier.eval()\npredictions = m_classifier(x)\n\n# Quantized Classification\nx = torch.rand(1, 3, 224, 224)\nm_classifier = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)\nm_classifier.eval()\npredictions = m_classifier(x)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "# Object Detection: Highly Accurate High Resolution Mobile Model\nx = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]\nm_detector = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)\nm_detector.eval()\npredictions = m_detector(x)\n\n# Semantic Segmentation: Highly Accurate Mobile Model\nx = torch.rand(1, 3, 520, 520)\nm_segmenter = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)\nm_segmenter.eval()\npredictions = m_segmenter(x)\n```\nThese models are highly competitive with TorchVision\u2019s existing models on resource efficiency, speed, and accuracy. See our [release notes](https://github.com/pytorch/vision/releases) for detailed performance metrics.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "### [Stable] AutoAugment\n[AutoAugment](https://arxiv.org/pdf/1805.09501.pdf) is a common Data Augmentation technique that can increase the accuracy of Scene Classification models. Though the data augmentation policies are directly linked to their trained dataset, empirical studies show that ImageNet policies provide significant improvements when applied to other datasets. We\u2019ve implemented 3 policies learned on the following datasets: ImageNet, CIFA10 and SVHN. These can be used standalone or mixed-and-matched with existing transforms:\n```python\nfrom torchvision import transforms\n\nt = transforms.AutoAugment()\ntransformed = t(image)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "transform=transforms.Compose([\n transforms.Resize(256),\n transforms.AutoAugment(),\n transforms.ToTensor()])\n```\n### Other New Features for TorchVision\n* [Stable] All read and decode methods in the io.image package now support:\n * Palette, Grayscale Alpha and RBG Alpha image types during PNG decoding\n * On-the-fly conversion of image from one type to the other during read\n* [Stable] WiderFace dataset\n* [Stable] Improved FasterRCNN speed and accuracy by introducing a score threshold on RPN\n* [Stable] Modulation input for DeformConv2D\n* [Stable] Option to write audio to a video file\n* [Stable] Utility to draw bounding boxes\n* [Beta] Autocast support in all Operators\nFind the full TorchVision release notes [here](https://github.com/pytorch/vision/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "# TorchAudio 0.8.0\n### I/O Improvements\nWe have continued our work from the [previous release](https://github.com/pytorch/audio/releases/tag/v0.7.0) to improve TorchAudio\u2019s I/O support, including:\n* [Stable] Changing the default backend to \u201csox_io\u201d (for Linux/macOS), and updating the \u201csoundfile\u201d backend\u2019s interface to align with that of \u201csox_io\u201d. The legacy backend and interface are still accessible, though it is strongly discouraged to use them.\n* [Stable] File-like object support in both \"sox_io\" backend, \u201csoundfile\u201d backend and sox_effects.\n* [Stable] New options to change the format, encoding, and bits_per_sample when saving.\n* [Stable] Added GSM, HTK, AMB, AMR-NB and AMR-WB format support to the \u201csox_io\u201d backend.\n* [Beta] A new ```functional.apply_codec``` function which can degrade audio data by applying audio codecs supported by \u201csox_io\u201d backend in an in-memory fashion.\nHere are some examples of features landed in this release:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "```python\n# Load audio over HTTP\nwith requests.get(URL, stream=True) as response:\n waveform, sample_rate = torchaudio.load(response.raw)\n \n# Saving to Bytes buffer as 32-bit floating-point PCM\nbuffer_ = io.BytesIO()\ntorchaudio.save(\n buffer_, waveform, sample_rate,\n format=\"wav\", encoding=\"PCM_S\", bits_per_sample=16)\n \n# Apply effects while loading audio from S3\nclient = boto3.client('s3')\nresponse = client.get_object(Bucket=S3_BUCKET, Key=S3_KEY)\nwaveform, sample_rate = torchaudio.sox_effects.apply_effect_file(\n response['Body'],\n [[\"lowpass\", \"-1\", \"300\"], [\"rate\", \"8000\"]])\n \n# Apply GSM codec to Tensor\nencoded = torchaudio.functional.apply_codec(\n waveform, sample_rate, format=\"gsm\")\n```\n\nCheck out the revamped audio preprocessing tutorial, [Audio Manipulation with TorchAudio](https://pytorch.org/tutorials/beginner/audio_preprocessing_tutorial.html).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "### [Stable] Switch to CMake-based build\nIn the previous version of TorchAudio, it was utilizing CMake to build third party dependencies. Starting in 0.8.0, TorchaAudio uses CMake to build its C++ extension. This will open the door to integrate TorchAudio in non-Python environments (such as C++ applications and mobile). We will continue working on adding example applications and mobile integrations.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "### [Beta] Improved and New Audio Transforms\nWe have added two widely requested operators in this release: the SpectralCentroid transform and the Kaldi Pitch feature extraction (detailed in [\"A pitch extraction algorithm tuned for automatic speech recognition\"](https://ieeexplore.ieee.org/document/6854049)). We\u2019ve also exposed a normalization method to Mel transforms, and additional STFT arguments to Spectrogram. We would like to ask our community to continue to [raise feature requests](https://github.com/pytorch/audio/issues/new?assignees=&labels=&template=feature-request.md) for core audio processing features like these!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "### Community Contributions\nWe had more contributions from the open source community in this release than ever before, including several completely new features. We would like to extend our sincere thanks to the community. Please check out the newly added [CONTRIBUTING.md](https://github.com/pytorch/audio/blob/master/CONTRIBUTING.md) for ways to contribute code, and remember that reporting bugs and requesting features are just as valuable. We will continue posting well-scoped work items as issues labeled \u201chelp-wanted\u201d and \u201ccontributions-welcome\u201d for anyone who would like to contribute code, and are happy to coach new contributors through the contribution process.\n\nFind the full TorchAudio release notes [here](https://github.com/pytorch/audio/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "# TorchText 0.9.0\n### [Beta] Dataset API Updates\nIn this release, we are updating TorchText\u2019s dataset API to be compatible with PyTorch data utilities, such as DataLoader, and are deprecating TorchText\u2019s custom data abstractions such as ```Field```. The updated datasets are simple string-by-string iterators over the data. For guidance about migrating from the legacy abstractions to use modern PyTorch data utilities, please refer to our [migration guide](https://github.com/pytorch/text/blob/master/examples/legacy_tutorial/migration_tutorial.ipynb).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "The text datasets listed below have been updated as part of this work. For examples of how to use these datasets, please refer to our [end-to-end text classification tutorial](https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html).\n* **Language modeling:** WikiText2, WikiText103, PennTreebank, EnWik9\n* **Text classification:** AG_NEWS, SogouNews, DBpedia, YelpReviewPolarity, YelpReviewFull, YahooAnswers, AmazonReviewPolarity, AmazonReviewFull, IMDB\n* **Sequence tagging:** UDPOS, CoNLL2000Chunking\n* **Translation:** IWSLT2016, IWSLT2017\n* **Question answer:** SQuAD1, SQuAD2\n\nFind the full TorchText release notes [here](https://github.com/pytorch/text/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
+{"page_content": "# [Stable] TorchCSPRNG 0.2.0\nWe [released TorchCSPRNG in August 2020](https://pytorch.org/blog/torchcsprng-release-blog/), a PyTorch C++/CUDA extension that provides cryptographically secure pseudorandom number generators for PyTorch. Today, we are releasing the 0.2.0 version and designating the library as stable. This release includes a new API for encrypt/decrypt with AES128 ECB/CTR as well as CUDA 11 and Windows CUDA support.\n\nFind the full TorchCSPRNG release notes [here](https://github.com/pytorch/csprng/releases/).\n\n\n\n\n\n\n\nThanks for reading, and if you are excited about these updates and want to participate in the future of PyTorch, we encourage you to join the [discussion forums](https://discuss.pytorch.org/) and [open GitHub issues](https://github.com/pytorch).\n\nCheers!\n\n***Team PyTorch***", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}}
{"page_content": "---\nlayout: blog_detail\ntitle: 'An overview of the ML models introduced in TorchVision v0.9'\nauthor: Team PyTorch \n---\n\nTorchVision v0.9 has been [released](https://github.com/pytorch/vision/releases) and it is packed with numerous new Machine Learning models and features, speed improvements and bug fixes. In this blog post, we provide a quick overview of the newly introduced ML models and discuss their key features and characteristics.", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "Classification", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "* **MobileNetV3 Large & Small:** These two classification models are optimized for Mobile use-cases and are used as backbones on other Computer Vision tasks. The implementation of the new [MobileNetV3 architecture](https://github.com/pytorch/vision/blob/master/torchvision/models/mobilenetv3.py) supports the Large & Small variants and the depth multiplier parameter as described in the [original paper](https://arxiv.org/pdf/1905.02244.pdf). We offer pre-trained weights on ImageNet for both Large and Small", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "networks with depth multiplier 1.0 and resolution 224x224. Our previous [training recipes](https://github.com/pytorch/vision/tree/master/references/classification#mobilenetv3-large--small) have been updated and can be used to easily train the models from scratch (shoutout to Ross Wightman for inspiring some of our [training configuration](https://rwightman.github.io/pytorch-image-models/training_hparam_examples/#mobilenetv3-large-100-75766-top-1-92542-top-5)). The Large variant offers a [competitive", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "accuracy](https://github.com/pytorch/vision/blob/master/docs/source/models.rst#classification) comparing to ResNet50 while being over 6x faster on CPU, meaning that it is a good candidate for applications where speed is important. For applications where speed is critical, one can sacrifice further accuracy for speed and use the Small variant which is 15x faster than ResNet50.", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "* **Quantized MobileNetV3 Large:** The quantized version of MobilNetV3 Large reduces the number of parameters by 45% and it is roughly 2.5x faster than the non-quantized version while remaining competitive in [terms of accuracy](https://github.com/pytorch/vision/blob/master/docs/source/models.rst#quantized-models). It was fitted on ImageNet using Quantization Aware Training by iterating on the non-quantized version and it can be trained from scratch using the existing [reference", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "scripts](https://github.com/pytorch/vision/tree/master/references/classification#quantized).", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "**Usage:**\n```\nmodel = torchvision.models.mobilenet_v3_large(pretrained=True)\n# model = torchvision.models.mobilenet_v3_small(pretrained=True)\n# model = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)\nmodel.eval()\npredictions = model(img)\n```", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "Object Detection", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "* **Faster R-CNN MobileNetV3-Large FPN:** Combining the MobileNetV3 Large backbone with a Faster R-CNN detector and a Feature Pyramid Network leads to a highly accurate and fast object detector. The pre-trained weights are fitted on COCO 2017 using the provided reference [scripts](https://github.com/pytorch/vision/tree/master/references/detection#faster-r-cnn-mobilenetv3-large-fpn) and the model is 5x faster on CPU than the equivalent ResNet50 detector while remaining competitive in [terms of", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "accuracy](https://github.com/pytorch/vision/blob/master/docs/source/models.rst#object-detection-instance-segmentation-and-person-keypoint-detection).", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "* **Faster R-CNN MobileNetV3-Large 320 FPN:** This is an iteration of the previous model that uses reduced resolution (min_size=320 pixel) and sacrifices accuracy for speed. It is 25x faster on CPU than the equivalent ResNet50 detector and thus it is good for real mobile use-cases.", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "**Usage:**\n```\nmodel = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)\n# model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)\nmodel.eval()\npredictions = model(img)\n```", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "Semantic Segmentation", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "* **DeepLabV3 with Dilated MobileNetV3 Large Backbone:** A dilated version of the MobileNetV3 Large backbone combined with DeepLabV3 helps us build a highly accurate and fast semantic segmentation model. The pre-trained weights are fitted on COCO 2017 using our [standard training recipes](https://github.com/pytorch/vision/tree/master/references/segmentation#deeplabv3_mobilenet_v3_large). The final model has the [same", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "accuracy](https://github.com/pytorch/vision/blob/master/docs/source/models.rst#semantic-segmentation) as the FCN ResNet50 but it is 8.5x faster on CPU and thus making it an excellent replacement for the majority of applications.", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
-{"page_content": "* **Lite R-ASPP with Dilated MobileNetV3 Large Backbone:** We introduce the implementation of a new segmentation head called Lite R-ASPP and combine it with the dilated MobileNetV3 Large backbone to build a very fast segmentation model. The new model sacrifices some accuracy to achieve a 15x speed improvement comparing to the previously most lightweight segmentation model which was the FCN ResNet50.", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
+{"page_content": "### Classification\n* **MobileNetV3 Large & Small:** These two classification models are optimized for Mobile use-cases and are used as backbones on other Computer Vision tasks. The implementation of the new [MobileNetV3 architecture](https://github.com/pytorch/vision/blob/master/torchvision/models/mobilenetv3.py) supports the Large & Small variants and the depth multiplier parameter as described in the [original paper](https://arxiv.org/pdf/1905.02244.pdf). We offer pre-trained weights on ImageNet for both Large and Small networks with depth multiplier 1.0 and resolution 224x224. Our previous [training recipes](https://github.com/pytorch/vision/tree/master/references/classification#mobilenetv3-large--small) have been updated and can be used to easily train the models from scratch (shoutout to Ross Wightman for inspiring some of our [training configuration](https://rwightman.github.io/pytorch-image-models/training_hparam_examples/#mobilenetv3-large-100-75766-top-1-92542-top-5)). The Large variant offers a [competitive accuracy](https://github.com/pytorch/vision/blob/master/docs/source/models.rst#classification) comparing to ResNet50 while being over 6x faster on CPU, meaning that it is a good candidate for applications where speed is important. For applications where speed is critical, one can sacrifice further accuracy for speed and use the Small variant which is 15x faster than ResNet50.", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
+{"page_content": "* **Quantized MobileNetV3 Large:** The quantized version of MobilNetV3 Large reduces the number of parameters by 45% and it is roughly 2.5x faster than the non-quantized version while remaining competitive in [terms of accuracy](https://github.com/pytorch/vision/blob/master/docs/source/models.rst#quantized-models). It was fitted on ImageNet using Quantization Aware Training by iterating on the non-quantized version and it can be trained from scratch using the existing [reference scripts](https://github.com/pytorch/vision/tree/master/references/classification#quantized).", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
+{"page_content": "**Usage:**\n```\nmodel = torchvision.models.mobilenet_v3_large(pretrained=True)\n# model = torchvision.models.mobilenet_v3_small(pretrained=True)\n# model = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)\nmodel.eval()\npredictions = model(img)\n```\n### Object Detection\n* **Faster R-CNN MobileNetV3-Large FPN:** Combining the MobileNetV3 Large backbone with a Faster R-CNN detector and a Feature Pyramid Network leads to a highly accurate and fast object detector. The pre-trained weights are fitted on COCO 2017 using the provided reference [scripts](https://github.com/pytorch/vision/tree/master/references/detection#faster-r-cnn-mobilenetv3-large-fpn) and the model is 5x faster on CPU than the equivalent ResNet50 detector while remaining competitive in [terms of accuracy](https://github.com/pytorch/vision/blob/master/docs/source/models.rst#object-detection-instance-segmentation-and-person-keypoint-detection). \n* **Faster R-CNN MobileNetV3-Large 320 FPN:** This is an iteration of the previous model that uses reduced resolution (min_size=320 pixel) and sacrifices accuracy for speed. It is 25x faster on CPU than the equivalent ResNet50 detector and thus it is good for real mobile use-cases.", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
+{"page_content": "**Usage:**\n```\nmodel = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)\n# model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)\nmodel.eval()\npredictions = model(img)\n```\n### Semantic Segmentation\n* **DeepLabV3 with Dilated MobileNetV3 Large Backbone:** A dilated version of the MobileNetV3 Large backbone combined with DeepLabV3 helps us build a highly accurate and fast semantic segmentation model. The pre-trained weights are fitted on COCO 2017 using our [standard training recipes](https://github.com/pytorch/vision/tree/master/references/segmentation#deeplabv3_mobilenet_v3_large). The final model has the [same accuracy](https://github.com/pytorch/vision/blob/master/docs/source/models.rst#semantic-segmentation) as the FCN ResNet50 but it is 8.5x faster on CPU and thus making it an excellent replacement for the majority of applications.\n* **Lite R-ASPP with Dilated MobileNetV3 Large Backbone:** We introduce the implementation of a new segmentation head called Lite R-ASPP and combine it with the dilated MobileNetV3 Large backbone to build a very fast segmentation model. The new model sacrifices some accuracy to achieve a 15x speed improvement comparing to the previously most lightweight segmentation model which was the FCN ResNet50.", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
{"page_content": "**Usage:**\n```\nmodel = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)\n# model = torchvision.models.segmentation.lraspp_mobilenet_v3_large(pretrained=True)\nmodel.eval()\npredictions = model(img)\n```\nIn the near future we plan to publish an article that covers the details of how the above models were trained and discuss their tradeoffs and design choices. Until then we encourage you to try out the new models and provide your feedback.", "metadata": {"source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"}}
{"page_content": "---\nlayout: blog_detail\ntitle: 'Introducing nvFuser, a deep learning compiler for PyTorch'\nauthor: Christian Sarofeen, Piotr Bialecki, Jie Jiang, Kevin Stephano, Masaki Kozuki, Neal Vaidya, Stas Bekman\nfeatured-img: \"/assets/images/introducing-nvfuser-a-deep-learning-compiler-for-pytorch-1.png\"\n---", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "nvFuser is a Deep Learning Compiler for NVIDIA GPUs that automatically just-in-time compiles fast and flexible kernels to reliably accelerate users' networks. It provides significant speedups for deep learning networks running on Volta and later CUDA accelerators by generating fast custom \u201cfusion\u201d kernels at runtime. nvFuser is specifically designed to meet the unique requirements of the PyTorch community, and it supports diverse network architectures and programs with dynamic inputs of varying shapes and", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "strides.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In this blog post we\u2019ll describe nvFuser and how it\u2019s used today, show the significant performance improvements it can obtain on models from HuggingFace and TIMM, and look ahead to nvFuser in PyTorch 1.13 and beyond. If you would like to know more about how and why fusion improves the speed of training for Deep Learning networks, please see our previous talks on nvFuser from [GTC 2022](https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41958/) and [GTC", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "2021](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31952/).", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "nvFuser relies on a graph representation of PyTorch operations to optimize and accelerate. Since PyTorch has an eager execution model, the PyTorch operations users are running are not directly accessible as a whole program that can be optimized by a system like nvFuser. Therefore users must utilize systems built on top of nvFuser which are capable of capturing users programs and translating them into a form that is optimizable by nvFuser. These higher level systems then pass these captured operations to", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "nvFuser, so that nvFuser can optimize the execution of the user\u2019s script for NVIDIA GPUs. There are three systems that capture, translate, and pass user programs to nvFuser for optimization:", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "- [TorchScript jit.script](https://pytorch.org/docs/stable/generated/torch.jit.script.html#torch.jit.script)\n - This system directly parses sections of an annotated python script to translate into its own representation what the user is doing. This system then applies its own version of auto differentiation to the graph, and passes sections of the subsequent forward and backwards graphs to nvFuser for optimization.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "- [FuncTorch](https://pytorch.org/functorch/stable/generated/functorch.compile.memory_efficient_fusion.html#functorch.compile.memory_efficient_fusion)", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "- This system doesn\u2019t directly look at the user python script, instead inserting a mechanism that captures PyTorch operations as they\u2019re being run. We refer to this type of capture system as \u201ctrace program acquisition\u201d, since we\u2019re tracing what has been performed. FuncTorch doesn\u2019t perform its own auto differentiation \u2013 it simply traces PyTorch\u2019s autograd directly to get backward graphs.\n- [TorchDynamo](https://github.com/pytorch/torchdynamo)", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "- TorchDynamo is another program acquisition mechanism built on top of FuncTorch. TorchDynamo parses the Python bytecode produced from the user script in order to select portions to trace with FuncTorch. The benefit of TorchDynamo is that it\u2019s able to apply decorators to a user\u2019s script, effectively isolating what should be sent to FuncTorch, making it easier for FuncTorch to successfully trace complex Python scripts.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "These systems are available for users to interact with directly while nvFuser automatically and seamlessly optimizes performance critical regions of the user\u2019s code. These systems automatically send parsed user programs to nvFuser so nvFuser can:", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "1. Analyze the operations being run on GPUs\n2. Plan parallelization and optimization strategies for those operations\n3. Apply those strategies in generated GPU code\n4. Runtime-compile the generated optimized GPU functions\n5. Execute those CUDA kernels on subsequent iterations", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "It is important to note nvFuser does not yet support all PyTorch operations, and there are still some scenarios that are actively being improved in nvFuser that are discussed herein. However, nvFuser does support many DL performance critical operations today, and the number of supported operations will grow in subsequent PyTorch releases. nvFuser is capable of generating highly specialized and optimized GPU functions for the operations it does have support for. This means nvFuser is able to power new", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch systems like TorchDynamo and FuncTorch to combine the flexibility PyTorch is known for with unbeatable performance.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "nvFuser Performance", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Before getting into how to use nvFuser, in this section we\u2019ll show the improvements in training speed nvFuser provides for a variety of models from the [HuggingFace Transformers](https://github.com/huggingface/transformers) and [PyTorch Image Models (TIMM)](https://github.com/rwightman/pytorch-image-models) repositories and we will discuss current gaps in nvFuser performance that are under development today. All performance numbers in this section were taken using an NVIDIA A100 40GB GPU, and used either", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "FuncTorch alone or Functorch with TorchDynamo.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "HuggingFace Transformer Benchmarks\n\nnvFuser can dramatically accelerate training of HuggingFace Transformers when combined with another important optimization (more on that in a moment). Performance improvements can be seen in Figure 1 to range between 1.12x and 1.50x across a subset of popular HuggingFace Transformer networks.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Figure 1: Performance gains of 8 training scenarios from HuggingFace\u2019s Transformer repository. First performance boost in the dark green is due to replacing the optimizer with an NVIDIA Apex fused AdamW optimizer. The light green is due to adding nvFuser. Models were run with batch size and sequence lengths of [64, 128], [8, 512], [2, 1024], [64, 128], [8, 512], [8, src_seql=512, tgt_seql=128], [8, src_seql=1024, tgt_seql=128], and [8, 512] respectively. All networks were run with Automatic Mixed Precision", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "(AMP) enabled with dtype=float16.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "While these speedups are significant, it\u2019s important to understand that nvFuser doesn\u2019t (yet) automate everything about running networks quickly. For HuggingFace Transformers, for example, it was important to use the AdamW fused optimizer from [NVIDIA\u2019s Apex repository](https://github.com/NVIDIA/apex) as the optimizer otherwise consumed a large portion of runtime. Using the fused AdamW optimizer to make the network faster exposes the next major performance bottleneck \u2014 memory bound operations. These", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "operations are optimized by nvFuser, providing another large performance boost. With the fused optimizer and nvFuser enabled, the training speed of these networks improved between 1.12x to 1.5x.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "HuggingFace Transformer models were run with [the torch.amp module](https://pytorch.org/docs/stable/amp.html). (\u201camp\u201d stands for Automated Mixed Precision, see the [\u201cWhat Every User Should Know about Mixed Precision in PyTorch\u201d](https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/) blog post for details.) An option to use nvFuser was added to HuggingFace\u2019sTrainer. If you have [TorchDynamo installed](https://github.com/pytorch/torchdynamo#requirements-and-setup) you", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "can activate it to enable nvFuser in HuggingFace by passing *torchdynamo = \u2018nvfuser\u2019* to the Trainer class.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "nvFuser has great support for normalization kernels and related fusions frequently found in Natural Language Processing (NLP) models, and it is recommended users try nvFuser in their NLP workloads.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Image Models (TIMM) Benchmarks", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "nvFuser, can also significantly reduce the training time of TIMM networks, up to over 1.3x vs. eager PyTorch, and up to 1.44x vs. eager PyTorch when combined with the torch.amp module. Figure 1 shows nvFuser\u2019s speedup without torch.amp, and when torch.amp is used with the NHWC (\u201cchannels last\u201d) and NCHW (\u201cchannels first\u201d) formats. nvFuser is integrated in TIMM through FuncTorch tracing directly (without TorchDynamo) and can be used by adding the [--aot-autograd command line", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "argument](https://github.com/rwightman/pytorch-image-models/commit/ca991c1fa57373286b9876aa63370fd19f5d6032) when running the TIMM benchmark or training script.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Figure 1: The Y-axis is the performance gain nvFuser provides over not using nvFuser. A value of 1.0 means no change in perf, 2.0 would mean nvFuser is twice as fast, 0.5 would mean nvFuser takes twice the time to run. Square markers are with float16 Automatic Mixed Precision (AMP) and channels first contiguous inputs, circle markers are float32 inputs, and triangles are with float16 AMP and channels last contiguous inputs. Missing data points are due to an error being encountered when tracing.\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "When running with float32 precision nvFuser provides a 1.12x geometric mean (\u201cgeomean\u201d) speedup on TIMM networks, and when running with torch.amp and \u201cchannels first\u201d it provides a 1.14x geomean speedup. However, nvFuser currently doesn\u2019t speedup torch.amp and \u201cchannels last\u201d training (a .9x geomean regression), so we recommend not using it in those cases. We are actively working on improving \u201cchannels last\u201d performance now, and soon we will have two additional optimization strategies (grid persistent", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "optimizations for channels-last normalizations and fast transposes) which we expect will provide speedups comparable to \u201cchannels first\u201d in PyTorch version 1.13 and later. Many of nvFuser\u2019s optimizations can also help in inference cases. However, in PyTorch when running inference on small batch sizes, the performance is typically limited by CPU overhead, which nvFuser can\u2019t completely remove or fix. Therefore, typically the most important optimization for inference is to enable [CUDA", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Graphs](https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/) when possible. Once CUDA Graphs is enabled, then it can also be beneficial to also enable fusion through nvFuser. Performance of inference is shown in Figure 2 and Figure 3. Inference is only run with float16 AMP as it is uncommon to run inference workloads in full float32 precision.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Figure 2: Performance gains of enabling CUDA Graphs, and CUDA Graphs with nvFuser compared to the performance of native PyTorch without CUDA Graphs and nvFuser across TIMM models with float16 AMP, channels first inputs, and a batch size of 1 and 8 respectively. There is a geomean speedup of 2.74x with CUDA Graphs and 2.71x with CUDA Graphs + nvFuser respectively. nvFuser provides a maximum regression of 0.68x and a maximum performance gain of 2.74x (relative to CUDA Graphs without nvFuser).", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Performance gain is measured relative to the average time per iteration PyTorch takes without CUDA Graphs and without nvFuser. Models are sorted by how much additional performance nvFuser is providing.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Figure 3: Performance gains of enabling CUDA Graphs, and CUDA Graphs with nvFuser compared to the performance of native PyTorch without CUDA Graphs and nvFuser across TIMM models with AMP, channels last inputs, and a batch size of 1 and 8 respectively. There is a geomean speedup of 2.29x with CUDA Graphs and 2.95x with CUDA Graphs + nvFuser respectively. nvFuser provides a maximum regression of 0.86x and a maximum performance gain of 3.82x (relative to CUDA Graphs without nvFuser). Performance gain", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "is measured relative to the average time per iteration PyTorch takes without CUDA Graphs and without nvFuser. Models are sorted by how much additional performance nvFuser is providing.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "
", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "So far nvFuser performance has not been tuned for inference workloads so its performance benefit is not consistent across all cases. However, there are still many models that benefit significantly from nvFuser during inference and we encourage users to try nvFuser in inference workloads to see if you would benefit today. Performance of nvFuser in inference workloads will improve in the future and if you\u2019re interested in nvFuser in inference workloads please reach out to us on the PyTorch forums.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Getting Started - Accelerate Your Scripts with nvFuser", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "We\u2019ve created [a tutorial](https://pytorch.org/tutorials/intermediate/nvfuser_intro_tutorial.html) demonstrating how to take advantage of nvFuser to accelerate part of a standard transformer block, and how nvFuser can be used to define fast and novel operations. There are still some rough edges in nvFuser that we\u2019re working hard on improving as we\u2019ve outlined in this blog post. However we\u2019ve also demonstrated some great improvements for training speed on multiple networks in HuggingFace and TIMM and we", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "expect there are opportunities in your networks where nvFuser can help today, and many more opportunities it will help in the future.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "If you would like to learn more about nvFuser we recommend watching our presentations from NVIDIA\u2019s GTC conference [GTC 2022](https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41958/) and [GTC 2021](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31952/).", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Introducing PyTorch Profiler - the new and improved performance tool'\nauthor: Maxim Lukiyanov - Principal PM at Microsoft, Guoliang Hua - Principal Engineering Manager at Microsoft, Geeta Chauhan - Partner Engineering Lead at Facebook, Gisle Dankel - Tech Lead at Facebook\n---", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "Along with [PyTorch 1.8.1 release](https://github.com/pytorch/pytorch/releases/tag/v1.8.1), we are excited to announce PyTorch Profiler \u2013 the new and improved performance debugging profiler for PyTorch. Developed as part of a collaboration between Microsoft and Facebook, the PyTorch Profiler is an open-source tool that enables accurate and efficient performance analysis and troubleshooting for large-scale deep learning models.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "Analyzing and improving large-scale deep learning model performance is an ongoing challenge that grows in importance as the model sizes increase. For a long time, PyTorch users had a hard time solving this challenge due to the lack of available tools. There were standard performance debugging tools that provide GPU hardware level information but missed PyTorch-specific context of operations. In order to recover missed information, users needed to combine multiple tools together or manually add minimum", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "correlation information to make sense of the data. There was also the autograd profiler (```torch.autograd.profiler```) which can capture information about PyTorch operations but does not capture detailed GPU hardware-level information and cannot provide support for visualization.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "The new PyTorch Profiler (```torch.profiler```) is a tool that brings both types of information together and then builds experience that realizes the full potential of that information. This new profiler collects both GPU hardware and PyTorch related information, correlates them, performs automatic detection of bottlenecks in the model, and generates recommendations on how to resolve these bottlenecks. All of this information from the profiler is visualized for the user in TensorBoard. The new Profiler API", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "is natively supported in PyTorch and delivers the simplest experience available to date where users can profile their models without installing any additional packages and see results immediately in TensorBoard with the new PyTorch Profiler plugin. Below is the screenshot of PyTorch Profiler - automatic bottleneck detection.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "Getting started\n\nPyTorch Profiler is the next version of the PyTorch autograd profiler. It has a new module namespace ```torch.profiler``` but maintains compatibility with autograd profiler APIs. The Profiler uses a new GPU profiling engine, built using Nvidia CUPTI APIs, and is able to capture GPU kernel events with high fidelity. To profile your model training loop, wrap the code in the profiler context manager as shown below.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "```python\n with torch.profiler.profile(\n schedule=torch.profiler.schedule(\n wait=2,\n warmup=2,\n active=6,\n repeat=1),\n on_trace_ready=tensorboard_trace_handler,\n with_stack=True\n) as profiler:\n for step, data in enumerate(trainloader, 0):\n print(\"step:{}\".format(step))\n inputs, labels = data[0].to(device=device), data[1].to(device=device)\n\n outputs = model(inputs)\n loss = criterion(outputs, labels)", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n profiler.step()\n```\nThe ```schedule``` parameter allows you to limit the number of training steps included in the profile to reduce the amount of data collected and simplify visual analysis by focusing on what\u2019s important. The ```tensorboard_trace_handler``` automatically saves profiling results to disk for analysis in TensorBoard.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "To view results of the profiling session in TensorBoard, install PyTorch Profiler TensorBoard Plugin package.\n\n```python\npip install torch_tb_profiler\n```", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "Visual Studio Code Integration", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "[Microsoft Visual Studio Code](https://code.visualstudio.com/) is one of the most popular code editors for Python developers and data scientists. The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for VS Code recently added the integration of TensorBoard into the code editor, including support for the PyTorch Profiler. Once you have VS Code and the Python extension installed, you can quickly open the TensorBoard Profiler plugin by launching the Command Palette", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "using the keyboard shortcut CTRL + SHIFT + P (CMD + SHIFT + P on a Mac) and typing the \u201cLaunch TensorBoard\u201d command.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "This integration comes with a built-in lifecycle management feature. VS Code will install the TensorBoard package and the PyTorch Profiler plugin package (coming in mid-April) automatically if you don\u2019t have them on your system. VS Code will also launch TensorBoard process for you and automatically look for any TensorBoard log files within your current directory. When you\u2019re done, just close the tab and VS Code will automatically close the process. No more Terminal windows running on your system to", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "provide a backend for the TensorBoard UI! Below is PyTorch Profiler Trace View running in TensorBoard.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
\n\nLearn more about TensorBoard support in VS Code in [this blog](https://devblogs.microsoft.com/python/python-in-visual-studio-code-february-2021-release/).", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "Feedback\n\nReview [PyTorch Profiler documentation](https://pytorch.org/docs/stable/profiler.html), give Profiler a try and let us know about your experience. Provide your feedback on [PyTorch Discussion Forum](https://discuss.pytorch.org/) or file issues on [PyTorch GitHub](https://github.com/pytorch/pytorch).", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"How Disney Improved Activity Recognition Through Multimodal Approaches with PyTorch\"\nauthor: Monica Alfaro, Albert Aparicio, Francesc Guitart, Marc Junyent, Pablo Pernias, Marcel Porta, and Miquel \u00c0ngel Farr\u00e9 (former Senior Technology Manager)\nfeatured-img: 'assets/images/disney_media_logo.jpg'\n---\n\n# Introduction", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Among the many things Disney Media & Entertainment Distribution (DMED) is responsible for, is the management and distribution of a huge array of media assets including news, sports, entertainment and features, episodic programs, marketing and advertising and more.\n\n\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Our team focuses on media annotation as part of DMED Technology\u2019s content platforms group. In our day-to-day work, we automatically analyze a variety of content that constantly challenges the efficiency of our machine learning workflow and the accuracy of our models.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Several of our colleagues recently discussed the workflow efficiencies that we achieved by switching to an end-to-end video analysis pipeline using PyTorch, as well as how we approach animated character recognition. We invite you to read more about both in this previous post.\n\nWhile the conversion to an end-to-end PyTorch pipeline is a solution that any company might benefit from, animated character recognition was a uniquely-Disney concept and solution.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In this article we will focus on activity recognition, which is a general challenge across industries \u2014 but with some specific opportunities when leveraged in the media production field, because we can combine audio, video, and subtitles to provide a solution.\n\n# Experimenting with Multimodality", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Working on a multimodal problem adds more complexity to the usual training pipelines. Having multiple information modes for each example means that the multimodal pipeline has to have specific implementations to process each mode in the dataset. Usually after this processing step, the pipeline has to merge or fuse the outputs.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Our initial experiments in multimodality were completed using the [MMF framework](https://github.com/facebookresearch/mmf). MMF is a modular framework for vision and language multimodal research. MMF contains reference implementations of state-of-the-art vision and language models and has also powered multiple research projects at Meta AI Research (as seen in this [poster](https://assets.pytorch.org/pted2021/posters/A3.png) presented in PyTorch Ecosystem Day 2020). Along with the recent release of", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "TorchMultimodal, a PyTorch library for training state-of-the-art multimodal models at scale, MMF highlights the growing interest in Multimodal understanding.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "MMF tackles this complexity with modular management of all the elements of the pipeline through a wide set of different implementations for specific modules, ranging from the processing of the modalities to the fusion of the processed information.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In our scenario, MMF was a great entry point to experiment with multimodality. It allowed us to iterate quickly by combining audio, video and closed captioning and experiment at different levels of scale with certain multimodal models, shifting from a single GPU to TPU Pods.\n\n# Multimodal Transformers", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "With a workbench based on MMF, our initial model was based on a concatenation of features from each modality evolving to a pipeline that included a Transformer-based fusion module to combine the different input modes.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Specifically, we made use of the fusion module called MMFTransformer, developed in collaboration with the Meta AI Research team. This is an implementation based on [VisualBERT](https://arxiv.org/abs/1908.03557) for which the necessary modifications were added to be able to work with text, audio and video.\n\nDespite having decent results with the out-of-box implementation MMFTransformer, we were still far from our goal, and the Transformers-based models required more data than we had available.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "# Searching for less data-hungry solutions\n\nSearching for less data-hungry solutions, our team started studying [MLP-Mixer](https://arxiv.org/abs/2105.01601). This new architecture has been proposed by the Google Brain team and it provides an alternative to well established de facto architectures like convolutions or self-attention for computer vision tasks.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "MLP-Mixer\n\nThe core idea behind mixed variations consists of replacing the convolutions or self-attention mechanisms used in transformers with Multilayer Perceptrons. This change in architecture favors the performance of the model in high data regimes (especially with respect to the Transformers), while also opening some questions regarding the inductive biases hidden in the convolutions and the self-attention layers.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Those proposals perform great in solving image classification tasks by splitting the image in chunks, flattening those chunks into 1D vectors and passing them through a sequence of Mixer Layers.\n\n\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Inspired by the advantages of Mixer based architectures, our team searched for parallelisms with the type of problems we try to solve in video classification: specifically, instead of a single image, we have a set of frames that need to be classified, along with audio and closed captioning in the shape of new modalities.\n\n# Activity Recognition reinterpreting the MLP-Mixer", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Our proposal takes the core idea of the [MLP-Mixer](https://arxiv.org/abs/2105.01601) \u2014 using multiple multi-layer perceptrons on a sequence and transposed sequence and extends it into a Multi Modal framework that allows us to process video, audio & text with the same architecture.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "For each of the modalities, we use different extractors that will provide embeddings describing the content. Given the embeddings of each modality, the MLP-Mixer architecture solves the problem of deciding which of the modalities might be the most important, while also weighing how much each modality contributes to the final labeling.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "For example, when it comes to detecting laughs, sometimes the key information is in audio or in the frames, and in some of the cases we have a strong signal in the closed caption.\n\nWe tried processing each frame separately with a ResNet34 and getting a sequence of embeddings and by using a video-specific model called R3D, both pre-trained on ImageNet and Kinetics400 respectively.\n\n\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "To process the audio, we use the pretrained ResNet34, and we remove the final layers to be able to extract 2D embeddings from the audio spectrograms (for 224x224 images we end up with 7x7 embeddings).\n\n\n\n\n
\n
\n\n\n\nFor closed captioning, we are using a pre-trained BERT-large, with all layers frozen, except for the Embeddings & LayerNorms.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n\nOnce we have extracted the embedding from each modality, we concatenate them into a single sequence and pass it through a set of MLP-Mixer blocks; next we use average pooling & a classification head to get predictions.\n\n\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Our experiments have been performed on a custom, manually labeled dataset for activity recognition with 15 classes, which we know from experiments are hard and cannot all be predicted accurately using a single modality.\n\nThese experiments have shown a significant increase in performance using our approach, especially in a low/mid-data regime (75K training samples).", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "When it comes to using only Text and Audio, our experiments showed a 15 percent improvement in accuracy over using a classifier on top of the features extracted by state-of-the-art backbones.\n\nUsing Text, Audio and Video we have seen a 17 percent improvement in accuracy over using Meta AIFacebook\u2019s MMF Framework, which uses a VisualBERT-like model to combine modalities using more powerful state of the art backbones.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Currently, we extended the initial model to cover up to 55 activity classes and 45 event classes. One of the challenges we expect to improve upon in the future is to include all activities and events, even those that are less frequent.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Interpreting the MLP-Mixer mode combinations \n\nAn MLP-Mixer is a concatenation of MultiLayer Perceptrons. This can be, very roughly, approximated to a linear operation, in the sense that, once trained, the weights are fixed and the input will directly affect the output.\n\nOnce we assume that approximation, we also assume that for an input consisting of NxM numbers, we could find a NxM matrix that (when multiplied elementwise) could approximate the predictions of the MLP-Mixer for a class.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n\nWe will call this matrix a stencil, and if we have access to it, we can find what parts of the input embeddings are responsible for a specific prediction.\n\nYou can think of it as a punch card with holes in specific positions. Only information in those positions will pass and contribute to a specific prediction. So we can measure the intensity of the input at those positions.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\n\nOf course, this is an oversimplification, and there won't exist a unique stencil that perfectly represents all of the contributions of the input to a class (otherwise that would mean that the problem could be solved linearly). So this should be used for visualization purposes only, not as an accurate predictor.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Once we have a set of stencils for each class, we can effortlessly measure input contribution without relying on any external visualization techniques.\n\nTo find a stencil, we can start from a \"random noise\" stencil and optimize it to maximize the activations for a specific class by just back-propagating through the MLP-Mixer.\n\n\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "By doing this we can end up with many valid stencils, and we can reduce them to a few by using K-means to cluster them into similar stencils and averaging each cluster.\n\n# Using the Mixer to get the best of each world\n\nMLP-Mixer, used as an image classification model without convolutional layers, requires a lot of data, since the lack of inductive bias \u2013 one of the model's good points overall \u2013 is a weakness when it comes to working in low data domains.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "When used as a way to combine information previously extracted by large pretrained backbones (as opposed to being used as a full end-to-end solution), they shine. The Mixer\u2019s strength lies in finding temporal or structural coherence between different inputs. For example, in video-related tasks we could extract embeddings from the frames using a powerful, pretrained model that understands what is going on at frame level and use the mixer to make sense of it in a sequential manner.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "This way of using the Mixer allows us to work with limited amounts of data and still get better results than what was achieved with Transformers. This is because Mixers seem to be more stable during training and seem to pay attention to all the inputs, while Transformers tend to collapse and pay attention only to some modalities/parts of the sequence.\n\nAcknowledgements: We would like to thank the Meta AI Research and Partner Engineering teams for this collaboration.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Practical Quantization in PyTorch'\nauthor: Suraj Subramanian, Mark Saroufim, Jerry Zhang\nfeatured-img: ''\n---", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Quantization is a cheap and easy way to make your DNN run faster and with lower memory requirements. PyTorch offers a few different approaches to quantize your model. In this blog post, we'll lay a (quick) foundation of quantization in deep learning, and then take a look at how each technique looks like in practice. Finally we'll end with recommendations from the literature for using quantization in your workflows.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n Fig 1. PyTorch <3 Quantization\n
\n\n**Contents**\n* TOC\n{:toc}", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Fundamentals of Quantization\n\n> If someone asks you what time it is, you don't respond \"10:14:34:430705\", but you might say \"a quarter past 10\".\n\nQuantization has roots in information compression; in deep networks it refers to reducing the numerical precision of its weights and/or activations.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Overparameterized DNNs have more degrees of freedom and this makes them good candidates for information compression [[1]]. When you quantize a model, two things generally happen - the model gets smaller and runs with better efficiency. Hardware vendors explicitly allow for faster processing of 8-bit data (than 32-bit data) resulting in higher throughput. A smaller model has lower memory footprint and power consumption [[2]], crucial for deployment at the edge.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Mapping function\nThe mapping function is what you might guess - a function that maps values from floating-point to integer space. A commonly used mapping function is a linear transformation given by
, where
is the input and
are **quantization parameters**.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "To reconvert to floating point space, the inverse function is given by
. \n\n
, and their difference constitutes the *quantization error*.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Quantization Parameters\nThe mapping function is parameterized by the **scaling factor**
and **zero-point**
. \n\n
is simply the ratio of the input range to the output range \n
", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "where [
] is the clipping range of the input, i.e. the boundaries of permissible inputs. [
] is the range in quantized output space that it is mapped to. For 8-bit quantization, the output range
.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "
acts as a bias to ensure that a 0 in the input space maps perfectly to a 0 in the quantized space.
", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Calibration", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "The process of choosing the input clipping range is known as **calibration**. The simplest technique (also the default in PyTorch) is to record the running mininmum and maximum values and assign them to
and
. [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/calib.html) also uses entropy minimization (KL divergence), mean-square-error minimization, or", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "percentiles of the input range.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "In PyTorch, `Observer` modules ([docs](https://PyTorch.org/docs/stable/torch.quantization.html?highlight=observer#observers), [code](https://github.com/PyTorch/PyTorch/blob/748d9d24940cd17938df963456c90fa1a13f3932/torch/ao/quantization/observer.py#L88)) collect statistics on the input values and calculate the qparams
. Different calibration schemes result in different quantized outputs, and it's best to empirically verify which scheme works best for your", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "application and architecture (more on that later).", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfrom torch.quantization.observer import MinMaxObserver, MovingAverageMinMaxObserver, HistogramObserver\nC, L = 3, 4\nnormal = torch.distributions.normal.Normal(0,1)\ninputs = [normal.sample((C, L)), normal.sample((C, L))]\nprint(inputs)\n\n# >>>>>\n# [tensor([[-0.0590, 1.1674, 0.7119, -1.1270],\n# [-1.3974, 0.5077, -0.5601, 0.0683],\n# [-0.0929, 0.9473, 0.7159, -0.4574]]]),", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "# tensor([[-0.0236, -0.7599, 1.0290, 0.8914],\n# [-1.1727, -1.2556, -0.2271, 0.9568],\n# [-0.2500, 1.4579, 1.4707, 0.4043]])]\n\nobservers = [MinMaxObserver(), MovingAverageMinMaxObserver(), HistogramObserver()]\nfor obs in observers:\n for x in inputs: obs(x) \n print(obs.__class__.__name__, obs.calculate_qparams())", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "# >>>>>\n# MinMaxObserver (tensor([0.0112]), tensor([124], dtype=torch.int32))\n# MovingAverageMinMaxObserver (tensor([0.0101]), tensor([139], dtype=torch.int32))\n# HistogramObserver (tensor([0.0100]), tensor([106], dtype=torch.int32))\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Affine and Symmetric Quantization Schemes\n**Affine or asymmetric quantization** schemes assign the input range to the min and max observed values. Affine schemes generally offer tighter clipping ranges and are useful for quantizing non-negative activations (you don't need the input range to contain negative values if your input tensors are never negative). The range is calculated as", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "
. Affine quantization leads to more computationally expensive inference when used for weight tensors [[3]].", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "**Symmetric quantization** schemes center the input range around 0, eliminating the need to calculate a zero-point offset. The range is calculated as \n
. For skewed signals (like non-negative activations) this can result in bad quantization resolution because the clipping range includes values that never show up in the input (see the pyplot below).", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "```python\nact = torch.distributions.pareto.Pareto(1, 10).sample((1,1024))\nweights = torch.distributions.normal.Normal(0, 0.12).sample((3, 64, 7, 7)).flatten()\n\ndef get_symmetric_range(x):\n beta = torch.max(x.max(), x.min().abs())\n return -beta.item(), beta.item()\n\ndef get_affine_range(x):\n return x.min().item(), x.max().item()", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "def plot(plt, data, scheme):\n boundaries = get_affine_range(data) if scheme == 'affine' else get_symmetric_range(data)\n a, _, _ = plt.hist(data, density=True, bins=100)\n ymin, ymax = np.quantile(a[a>0], [0.25, 0.95])\n plt.vlines(x=boundaries, ls='--', colors='purple', ymin=ymin, ymax=ymax)\n\nfig, axs = plt.subplots(2,2)\nplot(axs[0, 0], act, 'affine')\naxs[0, 0].set_title(\"Activation, Affine-Quantized\")\n\nplot(axs[0, 1], act, 'symmetric')\naxs[0, 1].set_title(\"Activation, Symmetric-Quantized\")", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "plot(axs[1, 0], weights, 'affine')\naxs[1, 0].set_title(\"Weights, Affine-Quantized\")\n\nplot(axs[1, 1], weights, 'symmetric')\naxs[1, 1].set_title(\"Weights, Symmetric-Quantized\")\nplt.show()", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
Fig 2. Clipping ranges (in purple) for affine and symmetric schemes\n
\n\n\nIn PyTorch, you can specify affine or symmetric schemes while initializing the Observer. Note that not all observers support both schemes.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfor qscheme in [torch.per_tensor_affine, torch.per_tensor_symmetric]:\n obs = MovingAverageMinMaxObserver(qscheme=qscheme)\n for x in inputs: obs(x)\n print(f\"Qscheme: {qscheme} | {obs.calculate_qparams()}\")\n\n# >>>>>\n# Qscheme: torch.per_tensor_affine | (tensor([0.0101]), tensor([139], dtype=torch.int32))\n# Qscheme: torch.per_tensor_symmetric | (tensor([0.0109]), tensor([128]))\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Per-Tensor and Per-Channel Quantization Schemes\nQuantization parameters can be calculated for the layer's entire weight tensor as a whole, or separately for each channel. In per-tensor, the same clipping range is applied to all the channels in a layer\n\n\n
\n
Fig 3. Per-Channel uses one set of qparams for each channel. Per-tensor uses the same qparams for the entire tensor.\n
", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "For weights quantization, symmetric-per-channel quantization provides better accuracies; per-tensor quantization performs poorly, possibly due to high variance in conv weights across channels from batchnorm folding [[3]].\n\n```python\nfrom torch.quantization.observer import MovingAveragePerChannelMinMaxObserver\nobs = MovingAveragePerChannelMinMaxObserver(ch_axis=0) # calculate qparams for all `C` channels separately\nfor x in inputs: obs(x)\nprint(obs.calculate_qparams())", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "# >>>>>\n# (tensor([0.0090, 0.0075, 0.0055]), tensor([125, 187, 82], dtype=torch.int32))\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Backend Engine\nCurrently, quantized operators run on x86 machines via the [FBGEMM backend](https://github.com/pytorch/FBGEMM), or use [QNNPACK](https://github.com/pytorch/QNNPACK) primitives on ARM machines. Backend support for server GPUs (via TensorRT and cuDNN) is coming soon. Learn more about extending quantization to custom backends: [RFC-0019](https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md).", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "```python\nbackend = 'fbgemm' if x86 else 'qnnpack'\nqconfig = torch.quantization.get_default_qconfig(backend) \ntorch.backends.quantized.engine = backend", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "QConfig\n\nThe `QConfig` ([code](https://github.com/PyTorch/PyTorch/blob/d6b15bfcbdaff8eb73fa750ee47cef4ccee1cd92/torch/ao/quantization/qconfig.py#L165), [docs](https://pytorch.org/docs/stable/torch.quantization.html?highlight=qconfig#torch.quantization.QConfig)) NamedTuple stores the Observers and the quantization schemes used to quantize activations and weights.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Be sure to pass the Observer class (not the instance), or a callable that can return Observer instances. Use `with_args()` to override the default arguments.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "```python\nmy_qconfig = torch.quantization.QConfig(\n activation=MovingAverageMinMaxObserver.with_args(qscheme=torch.per_tensor_affine),\n weight=MovingAveragePerChannelMinMaxObserver.with_args(qscheme=torch.qint8)\n)\n# >>>>>\n# QConfig(activation=functools.partial(, qscheme=torch.per_tensor_affine){}, weight=functools.partial(, qscheme=torch.qint8){})", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "In PyTorch\n\nPyTorch allows you a few different ways to quantize your model depending on\n- if you prefer a flexible but manual, or a restricted automagic process (*Eager Mode* v/s *FX Graph Mode*)\n- if qparams for quantizing activations (layer outputs) are precomputed for all inputs, or calculated afresh with each input (*static* v/s *dynamic*),\n- if qparams are computed with or without retraining (*quantization-aware training* v/s *post-training quantization*)", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "FX Graph Mode automatically fuses eligible modules, inserts Quant/DeQuant stubs, calibrates the model and returns a quantized module - all in two method calls - but only for networks that are [symbolic traceable](https://PyTorch.org/docs/stable/fx.html#torch.fx.symbolic_trace). The examples below contain the calls using Eager Mode and FX Graph Mode for comparison.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "In DNNs, eligible candidates for quantization are the FP32 weights (layer parameters) and activations (layer outputs). Quantizing weights reduces the model size. Quantized activations typically result in faster inference.\n\nAs an example, the 50-layer ResNet network has ~26 million weight parameters and computes ~16 million activations in the forward pass.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Post-Training Dynamic/Weight-only Quantization \nHere the model's weights are pre-quantized; the activations are quantized on-the-fly (\"dynamic\") during inference. The simplest of all approaches, it has a one line API call in `torch.quantization.quantize_dynamic`. Currently only Linear and Recurrent (`LSTM`, `GRU`, `RNN`) layers are supported for dynamic quantization.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "**(+)** Can result in higher accuracies since the clipping range is exactly calibrated for each input [[1]].\n \n **(+)** Dynamic quantization is preferred for models like LSTMs and Transformers where writing/retrieving the model's weights from memory dominate bandwidths [[4]]. \n \n **(-)** Calibrating and quantizing the activations at each layer during runtime can add to the compute overhead. \n\n```python\nimport torch\nfrom torch import nn", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "# toy model\nm = nn.Sequential(\n nn.Conv2d(2, 64, (8,)),\n nn.ReLU(),\n nn.Linear(16,10),\n nn.LSTM(10, 10))\n\nm.eval()", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "EAGER MODE\nfrom torch.quantization import quantize_dynamic\nmodel_quantized = quantize_dynamic(\n model=m, qconfig_spec={nn.LSTM, nn.Linear}, dtype=torch.qint8, inplace=False\n)", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "FX MODE\nfrom torch.quantization import quantize_fx\nqconfig_dict = {\"\": torch.quantization.default_dynamic_qconfig} # An empty key denotes the default applied to all modules\nmodel_prepared = quantize_fx.prepare_fx(m, qconfig_dict)\nmodel_quantized = quantize_fx.convert_fx(model_prepared)\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Post-Training Static Quantization (PTQ)\nPTQ also pre-quantizes model weights but instead of calibrating activations on-the-fly, the clipping range is pre-calibrated and fixed (\"static\") using validation data. Activations stay in quantized precision between operations during inference. About 100 mini-batches of representative data are sufficient to calibrate the observers [[2]]. The examples below use random data in calibration for convenience - using that in your application will result in bad qparams.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n Fig 4. Steps in Post-Training Static Quantization\n
\n\n\n[Module fusion](https://pytorch.org/tutorials/recipes/fuse.html) combines multiple sequential modules (eg: `[Conv2d, BatchNorm, ReLU]`) into one. Fusing modules means the compiler needs to only run one kernel instead of many; this speeds things up and improves accuracy by reducing quantization error.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "**(+)** Static quantization has faster inference than dynamic quantization because it eliminates the float<->int conversion costs between layers. \n\n**(-)** Static quantized models may need regular re-calibration to stay robust against distribution-drift.\n\n\n```python\n# Static quantization of a model consists of the following steps:", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "# Fuse modules\n# Insert Quant/DeQuant Stubs\n# Prepare the fused module (insert observers before and after layers)\n# Calibrate the prepared module (pass it representative data)\n# Convert the calibrated module (replace with quantized version)\n\nimport torch\nfrom torch import nn\nimport copy\n\nbackend = \"fbgemm\" # running on a x86 CPU. Use \"qnnpack\" if running on ARM.\n\nmodel = nn.Sequential(\n nn.Conv2d(2,64,3),\n nn.ReLU(),\n nn.Conv2d(64, 128, 3),\n nn.ReLU()\n)", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "EAGER MODE\nm = copy.deepcopy(model)\nm.eval()\n\"\"\"Fuse\n- Inplace fusion replaces the first module in the sequence with the fused module, and the rest with identity modules\n\"\"\"\ntorch.quantization.fuse_modules(m, ['0','1'], inplace=True) # fuse first Conv-ReLU pair\ntorch.quantization.fuse_modules(m, ['2','3'], inplace=True) # fuse second Conv-ReLU pair\n\n\"\"\"Insert stubs\"\"\"\nm = nn.Sequential(torch.quantization.QuantStub(), \n *m, \n torch.quantization.DeQuantStub())", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "\"\"\"Prepare\"\"\"\nm.qconfig = torch.quantization.get_default_qconfig(backend)\ntorch.quantization.prepare(m, inplace=True)\n\n\"\"\"Calibrate\n- This example uses random data for convenience. Use representative (validation) data instead.\n\"\"\"\nwith torch.inference_mode():\n for _ in range(10):\n x = torch.rand(1,2, 28, 28)\n m(x)\n \n\"\"\"Convert\"\"\"\ntorch.quantization.convert(m, inplace=True)\n\n\"\"\"Check\"\"\"\nprint(m[[1]].weight().element_size()) # 1 byte instead of 4 bytes for FP32", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "FX GRAPH\nfrom torch.quantization import quantize_fx\nm = copy.deepcopy(model)\nm.eval()\nqconfig_dict = {\"\": torch.quantization.get_default_qconfig(backend)}\n# Prepare\nmodel_prepared = quantize_fx.prepare_fx(m, qconfig_dict)\n# Calibrate - Use representative (validation) data.\nwith torch.inference_mode():\n for _ in range(10):\n x = torch.rand(1,2,28, 28)\n model_prepared(x)\n# quantize\nmodel_quantized = quantize_fx.convert_fx(model_prepared)\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Quantization-aware Training (QAT)\n\n
\n
\n Fig 5. Steps in Quantization-Aware Training\n
", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "The PTQ approach is great for large models, but accuracy suffers in smaller models [[6]]. This is of course due to the loss in numerical precision when adapting a model from FP32 to the INT8 realm *(Figure 6(a))*. QAT tackles this by including this quantization error in the training loss, thereby training an INT8-first model.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n Fig 6. Comparison of PTQ and QAT convergence [3]\n
", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "All weights and biases are stored in FP32, and backpropagation happens as usual. However in the forward pass, quantization is internally simulated via `FakeQuantize` modules. They are called fake because they quantize and immediately dequantize the data, adding quantization noise similar to what might be encountered during quantized inference. The final loss thus accounts for any expected quantization errors. Optimizing on this allows the model to identify a wider region in the loss function *(Figure", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "6(b))*, and identify FP32 parameters such that quantizing them to INT8 does not significantly affect accuracy.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
Fig 7. Fake Quantization in the forward and backward pass \n
Image source: https://developer.nvidia.com/blog/achieving-fp32-accuracy-for-int8-inference-using-quantization-aware-training-with-tensorrt\n
\n\n**(+)** QAT yields higher accuracies than PTQ.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "**(+)** Qparams can be learned during model training for more fine-grained accuracy (see [LearnableFakeQuantize](https://github.com/pytorch/pytorch/blob/master/torch/ao/quantization/_learnable_fake_quantize.py))\n\n**(-)** Computational cost of retraining a model in QAT can be several hundred epochs [[1]]\n\n\n```python\n# QAT follows the same steps as PTQ, with the exception of the training loop before you actually convert the model to its quantized version\n\nimport torch\nfrom torch import nn", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "backend = \"fbgemm\" # running on a x86 CPU. Use \"qnnpack\" if running on ARM.\n\nm = nn.Sequential(\n nn.Conv2d(2,64,8),\n nn.ReLU(),\n nn.Conv2d(64, 128, 8),\n nn.ReLU()\n)\n\n\"\"\"Fuse\"\"\"\ntorch.quantization.fuse_modules(m, ['0','1'], inplace=True) # fuse first Conv-ReLU pair\ntorch.quantization.fuse_modules(m, ['2','3'], inplace=True) # fuse second Conv-ReLU pair", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "\"\"\"Insert stubs\"\"\"\nm = nn.Sequential(torch.quantization.QuantStub(), \n *m, \n torch.quantization.DeQuantStub())\n\n\"\"\"Prepare\"\"\"\nm.train()\nm.qconfig = torch.quantization.get_default_qconfig(backend)\ntorch.quantization.prepare_qat(m, inplace=True)", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "\"\"\"Training Loop\"\"\"\nn_epochs = 10\nopt = torch.optim.SGD(m.parameters(), lr=0.1)\nloss_fn = lambda out, tgt: torch.pow(tgt-out, 2).mean()\nfor epoch in range(n_epochs):\n x = torch.rand(10,2,24,24)\n out = m(x)\n loss = loss_fn(out, torch.rand_like(out))\n opt.zero_grad()\n loss.backward()\n opt.step()\n\n\"\"\"Convert\"\"\"\nm.eval()\ntorch.quantization.convert(m, inplace=True)", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Sensitivity Analysis", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Not all layers respond to quantization equally, some are more sensitive to precision drops than others. Identifying the optimal combination of layers that minimizes accuracy drop is time-consuming, so [[3]] suggest a one-at-a-time sensitivity analysis to identify which layers are most sensitive, and retaining FP32 precision on those. In their experiments, skipping just 2 conv layers (out of a total 28 in MobileNet v1) give them near-FP32 accuracy. Using FX Graph Mode, we can create custom qconfigs to do", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "this easily:", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "```python\n# ONE-AT-A-TIME SENSITIVITY ANALYSIS \n\nfor quantized_layer, _ in model.named_modules():\n print(\"Only quantizing layer: \", quantized_layer)\n\n # The module_name key allows module-specific qconfigs. \n qconfig_dict = {\"\": None, \n \"module_name\":[(quantized_layer, torch.quantization.get_default_qconfig(backend))]}\n\n model_prepared = quantize_fx.prepare_fx(model, qconfig_dict)\n # calibrate\n model_quantized = quantize_fx.convert_fx(model_prepared)\n # evaluate(model)", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Another approach is to compare statistics of the FP32 and INT8 layers; commonly used metrics for these are SQNR (Signal to Quantized Noise Ratio) and Mean-Squre-Error. Such a comparative analysis may also help in guiding further optimizations. \n\n\n
\n
\n Fig 8. Comparing model weights and activations\n
", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch provides tools to help with this analysis under the Numeric Suite. Learn more about using Numeric Suite from the [full tutorial](https://pytorch.org/tutorials/prototype/numeric_suite_tutorial.html).\n\n```python\n# extract from https://pytorch.org/tutorials/prototype/numeric_suite_tutorial.html\nimport torch.quantization._numeric_suite as ns\n\ndef SQNR(x, y):\n # Higher is better\n Ps = torch.norm(x)\n Pn = torch.norm(x-y)\n return 20*torch.log10(Ps/Pn)", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "wt_compare_dict = ns.compare_weights(fp32_model.state_dict(), int8_model.state_dict())\nfor key in wt_compare_dict:\n print(key, compute_error(wt_compare_dict[key]['float'], wt_compare_dict[key]['quantized'].dequantize()))\n\nact_compare_dict = ns.compare_model_outputs(fp32_model, int8_model, input_data)\nfor key in act_compare_dict:\n print(key, compute_error(act_compare_dict[key]['float'][0], act_compare_dict[key]['quantized'][0].dequantize()))", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Recommendations for your workflow\n\n
\n
\n Fig 9. Suggested quantization workflow\n
\n Click for larger image ", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "Points to note\n - Large (10M+ parameters) models are more robust to quantization error. [[2]]\n - Quantizing a model from a FP32 checkpoint provides better accuracy than training an INT8 model from scratch.[[2]]\n - Profiling the model runtime is optional but it can help identify layers that bottleneck inference.\n - Dynamic Quantization is an easy first step, especially if your model has many Linear or Recurrent layers.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "- Use symmetric-per-channel quantization with `MinMax` observers for quantizing weights. Use affine-per-tensor quantization with `MovingAverageMinMax` observers for quantizing activations[[2], [3]]\n - Use metrics like SQNR to identify which layers are most suscpetible to quantization error. Turn off quantization on these layers.\n - Use QAT to fine-tune for around 10% of the original training schedule with an annealing learning rate schedule starting at 1% of the initial training learning rate. [[3]]", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "- If the above workflow didn't work for you, we want to know more. Post a thread with details of your code (model architecture, accuracy metric, techniques tried). Feel free to cc me [@suraj.pt](https://discuss.pytorch.org/u/suraj.pt/).", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "That was a lot to digest, congratulations for sticking with it! Next, we'll take a look at quantizing a \"real-world\" model that uses dynamic control structures (if-else, loops). These elements disallow symbolic tracing a model, which makes it a bit tricky to directly quantize the model out of the box. In the next post of this series, we'll get our hands dirty on a model that is chock full of loops and if-else blocks, and even uses third-party libraries in the `forward` call.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "We'll also cover a cool new feature in PyTorch Quantization called Define-by-Run, that tries to ease this constraint by needing only subsets of the model's computational graph to be free of dynamic flow. Check out the [Define-by-Run poster at PTDD'21](https://s3.amazonaws.com/assets.pytorch.org/ptdd2021/posters/C8.png) for a preview.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "References\n[[1]] Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M. W., & Keutzer, K. (2021). A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630.\n\n[[2]] Krishnamoorthi, R. (2018). Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "[[3]] Wu, H., Judd, P., Zhang, X., Isaev, M., & Micikevicius, P. (2020). Integer quantization for deep learning inference: Principles and empirical evaluation. arXiv preprint arXiv:2004.09602.\n\n[[4]] PyTorch Quantization Docs\n\n\n[1]: https://arxiv.org/pdf/2103.13630.pdf\n[2]: https://arxiv.org/pdf/1806.08342.pdf\n[3]: https://arxiv.org/abs/2004.09602\n[4]: https://pytorch.org/docs/stable/quantization.html#prototype-fx-graph-mode-quantization", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'Towards Reproducible Research with PyTorch Hub'\nauthor: Team PyTorch\nredirect_from: /2019/06/10/pytorch_hub.html\n---", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Reproducibility is an essential requirement for many fields of research including those based on machine learning techniques. However, many machine learning publications are either not reproducible or are difficult to reproduce. With the continued growth in the number of research publications, including tens of thousands of papers now hosted on arXiv and submissions to conferences at an all time high, research reproducibility is more important than ever. While many of these publications are accompanied by", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "code as well as trained models which is helpful but still leaves a number of steps for users to figure out for themselves.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "We are excited to announce the availability of PyTorch Hub, a simple API and workflow that provides the basic building blocks for improving machine learning research reproducibility. PyTorch Hub consists of a pre-trained model repository designed specifically to facilitate research reproducibility and enable new research. It also has built-in support for [Colab](https://colab.research.google.com/), integration with [*Papers With Code*](https://paperswithcode.com/) and currently contains a broad set of", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "models that include Classification and Segmentation, Generative, Transformers, etc.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "[Owner] Publishing models", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Hub supports the publication of pre-trained models (model definitions and pre-trained weights) to a GitHub repository by adding a simple ```hubconf.py``` file.\nThis provides an enumeration of which models are to be supported and a list of dependencies needed to run the models.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Examples can be found in the [torchvision](https://github.com/pytorch/vision/blob/master/hubconf.py), [huggingface-bert](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/hubconf.py) and [gan-model-zoo](https://github.com/facebookresearch/pytorch_GAN_zoo) repositories.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Let us look at the simplest case: `torchvision`'s `hubconf.py`:\n\n```python\n# Optional list of dependencies required by the package\ndependencies = ['torch']", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "from torchvision.models.alexnet import alexnet\nfrom torchvision.models.densenet import densenet121, densenet169, densenet201, densenet161\nfrom torchvision.models.inception import inception_v3\nfrom torchvision.models.resnet import resnet18, resnet34, resnet50, resnet101, resnet152,\\\nresnext50_32x4d, resnext101_32x8d\nfrom torchvision.models.squeezenet import squeezenet1_0, squeezenet1_1\nfrom torchvision.models.vgg import vgg11, vgg13, vgg16, vgg19, vgg11_bn, vgg13_bn, vgg16_bn, vgg19_bn", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "from torchvision.models.segmentation import fcn_resnet101, deeplabv3_resnet101\nfrom torchvision.models.googlenet import googlenet\nfrom torchvision.models.shufflenetv2 import shufflenet_v2_x0_5, shufflenet_v2_x1_0\nfrom torchvision.models.mobilenet import mobilenet_v2", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "In `torchvision`, the models have the following properties:\n- Each model file can function and be executed independently\n- They dont require any package other than PyTorch (encoded in `hubconf.py` as `dependencies['torch']`)\n- They dont need separate entry-points, because the models when created, work seamlessly out of the box\n\nMinimizing package dependencies reduces the friction for users to load your model for immediate experimentation.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "A more involved example is HuggingFace's BERT models. Here is their `hubconf.py`\n\n```python\ndependencies = ['torch', 'tqdm', 'boto3', 'requests', 'regex']\n\nfrom hubconfs.bert_hubconf import (\n bertTokenizer,\n bertModel,\n bertForNextSentencePrediction,\n bertForPreTraining,\n bertForMaskedLM,\n bertForSequenceClassification,\n bertForMultipleChoice,\n bertForQuestionAnswering,\n bertForTokenClassification\n)", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Each model then requires an entrypoint to be created. Here is a code snippet to specify an entrypoint of the ```bertForMaskedLM``` model, which returns the pre-trained model weights.\n\n```python\ndef bertForMaskedLM(*args, **kwargs):\n \"\"\"\n BertForMaskedLM includes the BertModel Transformer followed by the\n pre-trained masked language modeling head.\n Example:\n ...\n \"\"\"\n model = BertForMaskedLM.from_pretrained(*args, **kwargs)\n return model", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "These entry-points can serve as wrappers around complex model factories. They can give a clean and consistent help docstring, have logic to support downloading of pretrained weights (for example via `pretrained=True`) or have additional hub-specific functionality such as visualization.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "With a `hubconf.py` in place, you can send a pull request based on the template [here](https://github.com/pytorch/hub/blob/master/docs/template.md).\nOur goal is to curate high-quality, easily-reproducible, maximally-beneficial models for research reproducibility.\nHence, we may work with you to refine your pull request and in some cases reject some low-quality models to be published.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Once we accept your pull request, your model will soon appear on [Pytorch hub webpage](https://pytorch.org/hub) for all users to explore.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "[User] Workflow\n\nAs a user, PyTorch Hub allows you to follow a few simple steps and do things like: 1) explore available models; 2) load a model; and 3) understand what methods are available for any given model. Let's walk through some examples of each.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Explore available entrypoints.\n\nUsers can list all available entrypoints in a repo using the ```torch.hub.list()``` API.\n\n```python\n>>> torch.hub.list('pytorch/vision')\n>>>\n['alexnet',\n'deeplabv3_resnet101',\n'densenet121',\n...\n'vgg16',\n'vgg16_bn',\n'vgg19',\n 'vgg19_bn']", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Note that PyTorch Hub also allows auxillary entrypoints (other than pretrained models), e.g. ```bertTokenizer``` for preprocessing in the [BERT](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_bert/) models, to make the user workflow smoother.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Load a model\n\nNow that we know which models are available in the Hub, users can load a model entrypoint using the ```torch.hub.load()``` API. This only requires a single command without the need to install a wheel. In addition the ```torch.hub.help()``` API can provide useful information about how to instantiate the model.\n\n```python\nprint(torch.hub.help('pytorch/vision', 'deeplabv3_resnet101'))\nmodel = torch.hub.load('pytorch/vision', 'deeplabv3_resnet101', pretrained=True)", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "It is also common that repo owners will want to continually add bug fixes or performance improvements. PyTorch Hub makes it super simple for users to get the latest update by calling:\n\n```python\nmodel = torch.hub.load(..., force_reload=True)", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "We believe this will help to alleviate the burden of repetitive package releases by repo owners and instead allow them to focus more on their research.\nIt also ensures that, as a user, you are getting the freshest available models.\n\nOn the contrary, stability is important for users. Hence, some model owners serve them from a specificed branch or tag, rather than the `master` branch, to ensure stability of the code.\nFor example, `pytorch_GAN_zoo` serves them from the `hub` branch:", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "```python\nmodel = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'DCGAN', pretrained=True, useGPU=False)", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Note that the ```*args```, ```**kwargs``` passed to `hub.load()` are used to *instantiate* a model. In the above example, `pretrained=True` and `useGPU=False` are given to the model's entrypoint.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Explore a loaded model\n\nOnce you have a model from PyTorch Hub loaded, you can use the following workflow to find out the available methods that are supported as well as understand better what arguments are requires to run it.\n\n\n```dir(model)``` to see all available methods of the model. Let's take a look at `bertForMaskedLM`'s available methods.\n\n```python\n>>> dir(model)\n>>>\n['forward'\n...\n'to'\n'state_dict',\n]", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "```help(model.forward)``` provides a view into what arguments are required to make your loaded model run\n\n```python\n>>> help(model.forward)\n>>>\nHelp on method forward in module pytorch_pretrained_bert.modeling:\nforward(input_ids, token_type_ids=None, attention_mask=None, masked_lm_labels=None)\n...", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Have a closer look at the [BERT](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_bert/) and [DeepLabV3](https://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/) pages, where you can see how these models can be used once loaded.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Other ways to explore\n\nModels available in PyTorch Hub also support both [Colab](https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb) and are directly linked on [Papers With Code](https://paperswithcode.com/) and you can get started with a single click. [Here](https://paperswithcode.com/paper/densely-connected-convolutional-networks) is a good example to get started with (shown below).", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "\n

\n
", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "Additional resources:\n\n* PyTorch Hub API documentation can be found [here](https://pytorch.org/docs/stable/hub.html).\n* Submit a model [here](https://github.com/pytorch/hub) for publication in PyTorch Hub.\n* Go to [https://pytorch.org/hub](https://pytorch.org/hub) to learn more about the available models.\n* Look for more models to come on [paperswithcode.com](https://paperswithcode.com/).", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "A BIG thanks to the folks at HuggingFace, the PapersWithCode team, fast.ai and Nvidia as well as Morgane Riviere (FAIR Paris) and lots of others for helping bootstrap this effort!!\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "FAQ:\n\n**Q: If we would like to contribute a model that is already in the Hub but perhaps mine has better accuracy, should I still contribute?**\n\n\nA: Yes!! A next step for Hub is to implement an upvote/downvote system to surface the best models.\n\n**Q: Who hosts the model weights for PyTorch Hub?**", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "A: You, as the contributor, are responsible to host the model weights. You can host your model in your favorite cloud storage or, if it fits within the limits, on GitHub. If it is not within your means to host the weights, check with us via opening an issue on the hub repository.\n\n**Q: What if my model is trained on private data? Should I still contribute this model?**", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "A: No! PyTorch Hub is centered around open research and that extends to the usage of open datasets to train these models on. If a pull request for a proprietary model is submitted, we will kindly ask that you resubmit a model trained on something open and available.\n\n**Q: Where are my downloaded models saved?**", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "A: We follow the [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html) and adhere to common standards around cached files and directories.\n\nThe locations are used in the order of:\n\n* Calling ```hub.set_dir()```\n* ```$TORCH_HOME/hub```, if environment variable ```TORCH_HOME``` is set.\n* ```$XDG_CACHE_HOME/torch/hub```, if environment variable ```XDG_CACHE_HOME``` is set.\n* ```~/.cache/torch/hub```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'The road to 1.0: production ready PyTorch'\nauthor: The PyTorch Team\nredirect_from: /2018/05/02/road-to-1.0.html\n---", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "We would like to give you a preview of the roadmap for PyTorch 1.0 , the next release of PyTorch. Over the last year, we've had 0.2, 0.3 and 0.4 transform PyTorch from a [Torch+Chainer]-like interface into something cleaner, adding double-backwards, numpy-like functions, advanced indexing and removing Variable boilerplate. At this time, we're confident that the API is in a reasonable and stable state to confidently release a 1.0.\n\nHowever, 1.0 isn't just about stability of the interface.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "One of PyTorch's biggest strengths is its first-class Python integration, imperative style, simplicity of the API and options. These are aspects that make PyTorch good for research and hackability.\n\nOne of its biggest downsides has been production-support. What we mean by production-support is the countless things one has to do to models to run them efficiently at massive scale:", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "- exporting to C++-only runtimes for use in larger projects\n- optimizing mobile systems on iPhone, Android, Qualcomm and other systems\n- using more efficient data layouts and performing kernel fusion to do faster inference (saving 10% of speed or memory at scale is a big win)\n- quantized inference (such as 8-bit inference)", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "Startups, large companies and anyone who wants to build a product around PyTorch have asked for production support. At Facebook (the largest stakeholder for PyTorch) we have Caffe2, which has been the production-ready platform, running in our datacenters and shipping to more than 1 billion phones spanning eight generations of iPhones and six generations of Android CPU architectures. It has server-optimized inference on Intel / ARM, TensorRT support, and all the necessary bits for production. Considering all", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "this value locked-in to a platform that the PyTorch team works quite closely with, **we decided to marry PyTorch and Caffe2 which gives the production-level readiness for PyTorch**.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "Supporting production features without adding usability issues for our researchers and end-users needs creative solutions.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "Production != Pain for researchers\n\nAdding production capabilities involves increasing the API complexity and number of configurable options for models. One configures memory-layouts (NCHW vs NHWC vs N,C/32,H,W,32, each providing different performance characteristics), quantization (8-bit? 3-bit?), fusion of low-level kernels (you used a Conv + BatchNorm + ReLU, let's fuse them into a single kernel), separate backend options (MKLDNN backend for a few layers and NNPACK backend for other layers), etc.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch's central goal is to provide a great platform for research and hackability. So, while we add all these optimizations, we've been working with a hard design constraint to never trade these off against usability.\n\nTo pull this off, we are introducing `torch.jit`, a just-in-time (JIT) compiler that at runtime takes your PyTorch models and rewrites them to run at production-efficiency. The JIT compiler can also export your model to run in a C++-only runtime based on Caffe2 bits.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "> In 1.0, your code continues to work as-is, we're not making any big changes to the existing API.\n\nMaking your model production-ready is an opt-in annotation, which uses the `torch.jit` compiler to export your model to a Python-less environment, and improving its performance. Let's walk through the JIT compiler in detail.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "`torch.jit`: A JIT-compiler for your models", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "We strongly believe that it's hard to match the productivity you get from specifying your models directly as idiomatic Python code. This is what makes PyTorch so flexible, but it also means that PyTorch pretty much never knows the operation you'll run next. This however is a big blocker for export/productionization and heavyweight automatic performance optimizations because they need full upfront knowledge of how the computation will look before it even gets executed.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "We provide two opt-in ways of recovering this information from your code, one based on tracing native python code and one based on compiling a subset of the python language annotated into a python-free intermediate representation. After thorough discussions we concluded that they're both going to be useful in different contexts, and as such you will be able to mix and match them freely.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "Tracing Mode", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "The PyTorch tracer, `torch.jit.trace`, is a function that records all the native PyTorch operations performed in a code region, along with the data dependencies between them. In fact, PyTorch has had a tracer since 0.3, which has been used for exporting models through ONNX. What changes now, is that you no longer necessarily need to take the trace and run it elsewhere - PyTorch can re-execute it for you, using a carefully designed high-performance C++ runtime. As we develop PyTorch 1.0 this runtime will", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "integrate all the optimizations and hardware integrations that Caffe2 provides.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "The biggest benefit of this approach is that it doesn't really care how your Python code is structured \u2014 you can trace through generators or coroutines, modules or pure functions. Since we only record native PyTorch operators, these details have no effect on the trace recorded. This behavior, however, is a double-edged sword. For example, if you have a loop in your model, it will get unrolled in the trace, inserting a copy of the loop body for as many times as the loop ran. This opens up opportunities for", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "zero-cost abstraction (e.g. you can loop over modules, and the actual trace will be loop-overhead free!), but on the other hand this will also affect data dependent loops (think of e.g. processing sequences of varying lengths), effectively hard-coding a single length into the trace.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "For networks that do not contain loops and if statements, tracing is non-invasive and is robust enough to handle a wide variety of coding styles. This code example illustrates what tracing looks like:", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "```python\n# This will run your nn.Module or regular Python function with the example\n# input that you provided. The returned callable can be used to re-execute\n# all operations that happened during the example run, but it will no longer\n# use the Python interpreter.\nfrom torch.jit import trace\ntraced_model = trace(model, example_input=input)\ntraced_fn = trace(fn, example_input=input)", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "# The training loop doesn't change. Traced model behaves exactly like an\n# nn.Module, except that you can't edit what it does or change its attributes.\n# Think of it as a \"frozen module\".\nfor input, target in data_loader:\n loss = loss_fn(traced_model(input), target)\n```", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "Script Mode\n\nTracing mode is a great way to minimize the impact on your code, but we're also very excited about the models that fundamentally make use of control flow such as RNNs. Our solution to this is a scripting mode.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "In this case you write out a regular Python function, except that you can no longer use certain more complicated language features. Once you've isolated the desired functionality, you let us know that you'd like the function to get compiled by decorating it with the `@script` decorator. This annotation will transform your Python function directly into our high-performance C++ runtime. This lets us recover all the PyTorch operations along with loops and conditionals. They will be embedded into our internal", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "representation of this function, and will be accounted for every time this function is run.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "```python\nfrom torch.jit import script\n\n@script\ndef rnn_loop(x):\n    hidden = None\n    for x_t in x.split(1):\n        x, hidden = model(x_t, hidden)\n    return x\n```", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "Optimization and Export\n\nRegardless of whether you use tracing or `@script`, the result is a python-free representation of your model, which can be used to optimize the model or to export the model from python for use in production environments.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "Extracting bigger segments of the model into an intermediate representation makes it possible to do sophisticated whole-program optimizations and to offload computation to specialized AI accelerators which operate on graphs of computation. We have already been developing the beginnings of these optimizations, including passes that fuse GPU operations together to improve the performance of smaller RNN models.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "It also allows us to use existing high-performance backends available in Caffe2 today to run the model efficiently. Additionally, @script functions (and modules!) can be fully exported to ONNX in a way that retains their dynamic nature, such that you can easily run them in a Python-free environment using the model executors from Caffe2 or by transferring the model to any other framework supporting ONNX.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
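-{"page_content": "As a rough, hedged sketch of that export path (not from the original post; the model, input shape, and file name are placeholder assumptions, written against present-day `torch.jit` and `torch.onnx` APIs):\n\n```python\nimport torch\nimport torchvision\n\n# Trace a model with an example input (placeholder model and shape).\nmodel = torchvision.models.resnet18()\nexample = torch.rand(1, 3, 224, 224)\ntraced = torch.jit.trace(model, example)\n\n# Export the traced model to ONNX so it can be run in a Python-free\n# environment such as the Caffe2 executors mentioned above.\ntorch.onnx.export(traced, example, \"resnet18.onnx\")\n```", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}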
-{"page_content": "Usability\n\n**We care deeply about maintaining our current level of usability, and we know that executing code outside of Python makes debugging harder. This is something we think about a lot, and we're making sure that you don't get locked into a completely different programming language.**", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "First, we follow the principle of pay for what you use \u2014 if you don't need to optimize or export your model, you do not have to use these new features and won't see any downsides. Furthermore, use of traced or @script modules/functions can be done incrementally. For instance, all of these behaviors are allowed: You can trace part of your model and use the trace in a larger non-traced model. You can use tracing for 90% of your model, and use @script for the one sub-module that actually has some control flow", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "in it. You can write a function using @script and have it call a native python function. If something appears incorrect in an @script function, you can remove the annotation and the code will execute in native python where it is easy to debug using your favorite tools and methods. Think of tracing and @script like type annotations using MyPy or TypeScript \u2014 each additional annotation can be tested incrementally, and none are required until you want to optimize or productionize.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
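-{"page_content": "A minimal sketch of this kind of incremental mixing (not from the original post; the modules and functions are made up, and it is written against the current `torch.jit` API rather than the preview API shown above):\n\n```python\nimport torch\nfrom torch.jit import script, trace\n\n@script\ndef gated_sum(a, b):\n    # Control flow is preserved by @script instead of being unrolled away.\n    if bool(a.sum() > 0):\n        return a + b\n    return a - b\n\nclass Block(torch.nn.Module):\n    def forward(self, x):\n        return torch.relu(x) * 2\n\n# Trace the part of the model that has no control flow...\nblock = trace(Block(), torch.rand(4))\n\n# ...and freely call both from ordinary Python code.\ndef model(x, y):\n    return gated_sum(block(x), block(y))\n\nprint(model(torch.rand(4), torch.rand(4)))\n```", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}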
-{"page_content": "Most importantly, these modes will be built into the core of PyTorch so that mixing and matching them with your existing code can happen seamlessly.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "_Note: The name JIT for these components is a bit of a misnomer and comes from historical reasons. The tracing/function execution in PyTorch started out as an optimizing JIT compiler that generated fused CUDA kernels but then grew to encompass optimization, @script, and export. When it is ready for release we will likely rename this functionality to the hybrid frontend, but we wanted to present it here as it is named in the code so that you can follow along as we develop it._", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "Other changes and improvements\n\nProduction support is the big feature for 1.0, but we will continue optimizing and fixing other parts of PyTorch as part of the standard release process.\n\nOn the backend side of things, PyTorch will see some changes, which might affect user-written C and C++ extensions. We are replacing (or refactoring) the backend ATen library to incorporate features and optimizations from Caffe2.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "Last Words\n\nWe aim to release 1.0 some time during the summer. You can follow along with our progress on the [Pull Requests](https://github.com/pytorch/pytorch/pulls) page.\n\nYou can read this from the perspective of the Caffe2 project at: [https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html](https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html)", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch adds new tools and libraries, welcomes Preferred Networks to its community'\nauthor: Team PyTorch\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch continues to be used for the latest state-of-the-art research on display at the NeurIPS conference next week, making up nearly [70% of papers](https://chillee.github.io/pytorch-vs-tensorflow/) that cite a framework. In addition, we\u2019re excited to welcome Preferred Networks, the maintainers of the Chainer framework, to the PyTorch community. Their teams are moving fully over to PyTorch for developing their ML capabilities and services.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "This growth underpins PyTorch\u2019s focus on building for the needs of the research community, and increasingly, supporting the full workflow from research to production deployment. To further support researchers and developers, we\u2019re launching a number of new tools and libraries for large scale computer vision and elastic fault tolerant training. Learn more on GitHub and at our NeurIPS booth.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "Preferred Networks joins the PyTorch community\n\nPreferred Networks, Inc. (PFN) announced plans to move its deep learning framework from Chainer to PyTorch. As part of this change, PFN will collaborate with the PyTorch community and contributors, including people from Facebook, Microsoft, CMU, and NYU, to participate in the development of PyTorch.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "PFN developed Chainer, a deep learning framework that introduced the concept of define-by-run (also referred to as eager execution), to support and speed up its deep learning development. Chainer has been used at PFN since 2015 to rapidly solve real-world problems with the latest, cutting-edge technology. Chainer was also one of the inspirations for PyTorch\u2019s initial design, as outlined in the [PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "NeurIPS](https://papers.nips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library) paper.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "PFN has driven innovative work with [CuPy](https://cupy.chainer.org/), ImageNet in 15 minutes, [Optuna](https://optuna.org/), and other projects that have pushed the boundaries of design and engineering. As part of the PyTorch community, PFN brings with them creative engineering capabilities and experience to help take the framework forward. In addition, PFN\u2019s migration to PyTorch will allow it to efficiently incorporate the latest research results to accelerate its R&D activities, [given PyTorch\u2019s broad", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "adoption with researchers](https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/), and to collaborate with the community to add support for PyTorch on MN-Core, a deep learning processor currently in development.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "We are excited to welcome PFN to the PyTorch community, and to jointly work towards the common goal of furthering advances in deep learning technology. Learn more about the PFN\u2019s migration to PyTorch [here](https://preferred.jp/en/news/pr20191205/).", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "Tools for elastic training and large scale computer vision", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch Elastic (Experimental)\n\nLarge scale model training is becoming commonplace with architectures like BERT and the growth of model parameter counts into the billions or even tens of billions. To achieve convergence at this scale in a reasonable amount of time, the use of distributed training is needed.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "The current PyTorch Distributed Data Parallel (DDP) module enables data parallel training where each process trains the same model but on different shards of data. It enables bulk synchronous, multi-host, multi-GPU/CPU execution of ML training. However, DDP has several shortcomings; e.g. jobs cannot start without acquiring all the requested nodes; jobs cannot continue after a node fails due to error or transient issue; jobs cannot incorporate a node that joined later; and lastly, progress cannot be made", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "in the presence of a slow/stuck node.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
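-{"page_content": "For context, a bare-bones sketch of the standard (non-elastic) DDP setup being described; the backend, rendezvous settings, model, and data are placeholder assumptions that would normally come from your launcher and training script:\n\n```python\nimport torch\nimport torch.distributed as dist\nfrom torch.nn.parallel import DistributedDataParallel as DDP\n\n# Each process must know its rank and a fixed world size up front\n# (via environment variables here); this rigidity is what PyTorch Elastic relaxes.\ndist.init_process_group(backend=\"gloo\", init_method=\"env://\")\n\nmodel = torch.nn.Linear(10, 10)\nddp_model = DDP(model)\noptimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)\n\nfor batch, target in [(torch.randn(8, 10), torch.randn(8, 10))]:\n    loss = torch.nn.functional.mse_loss(ddp_model(batch), target)\n    optimizer.zero_grad()\n    loss.backward()  # gradients are synchronized across all processes here\n    optimizer.step()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}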
-{"page_content": "The focus of [PyTorch Elastic](https://github.com/pytorch/elastic), which uses Elastic Distributed Data Parallelism, is to address these issues and build a generic framework/APIs for PyTorch to enable reliable and elastic execution of these data parallel training workloads. It will provide better programmability, higher resilience to failures of all kinds, higher efficiency, and larger-scale training compared with pure DDP.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "Elasticity, in this case, means both: 1) the ability for a job to continue after node failure (by running with fewer nodes and/or by incorporating a new host and transferring state to it); and 2) the ability to add/remove nodes dynamically due to resource availability changes or bottlenecks.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "While this feature is still experimental, you can try it out on AWS EC2, with the instructions [here](https://github.com/pytorch/elastic/tree/master/aws). Additionally, the PyTorch distributed team is working closely with teams across AWS to support PyTorch Elastic training within services such as Amazon SageMaker and Elastic Kubernetes Service (EKS). Look for additional updates in the near future.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "New Classification Framework\n\nImage and video classification are at the core of content understanding. To that end, you can now leverage a new end-to-end framework for large-scale training of state-of-the-art image and video classification models. It allows researchers to quickly prototype and iterate on large distributed training jobs at the scale of billions of images. Advantages include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "* Ease of use - This framework features a modular, flexible design that allows anyone to train machine learning models on top of PyTorch using very simple abstractions. The system also has out-of-the-box integration with AWS on PyTorch Elastic, facilitating research at scale and making it simple to move between research and production.\n* High performance - Researchers can use the framework to train models such as ResNet50 on ImageNet in as little as 15 minutes.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "You can learn more at the [NeurIPS Expo workshop](https://nips.cc/ExpoConferences/2019/schedule?workshop_id=16) on Multi-Modal research to production or get started with the PyTorch Elastic Imagenet example [here](https://github.com/pytorch/elastic/blob/master/examples/imagenet/main.py).", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "Come see us at NeurIPS\n\nThe PyTorch team will be hosting workshops at NeurIPS during the industry expo on 12/8. Join the sessions below to learn more, and visit the team at the PyTorch booth on the show floor and during the Poster Session. At the booth, we\u2019ll be walking through an interactive demo of PyTorch running fast neural style transfer on a Cloud TPU - here\u2019s a [sneak peek](https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/style_transfer_inference-xrt-1-15.ipynb).", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "We\u2019re also publishing a [paper that details the principles that drove the implementation of PyTorch](https://papers.nips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library) and how they\u2019re reflected in its architecture.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "*[Multi-modal Research to Production](https://nips.cc/ExpoConferences/2019/schedule?workshop_id=16)* - This workshop will dive into a number of modalities such as computer vision (large scale image classification and instance segmentation) and Translation and Speech (seq-to-seq Transformers) from the lens of taking cutting edge research to production. Lastly, we will also walk through how to use the latest APIs in PyTorch to take eager mode developed models into graph mode via TorchScript and quantize them", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "for production deployment at scale on servers or mobile devices. Libraries used include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "* Classification Framework - a newly open sourced PyTorch framework developed by Facebook AI for research on large-scale image and video classification. It allows researchers to quickly prototype and iterate on large distributed training jobs. Models built on the framework can be seamlessly deployed to production.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "* Detectron2 - the recently released object detection library built by the Facebook AI Research computer vision team. We will articulate the improvements over the previous version including: 1) Support for latest models and new tasks; 2) Increased flexibility, to enable new computer vision research; 3) Maintainable and scalable, to support production use cases.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "* Fairseq - a general-purpose sequence-to-sequence library that can be used in many applications, including (unsupervised) translation, summarization, dialog, and speech recognition.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
+{"page_content": "nvFuser is a Deep Learning Compiler for NVIDIA GPUs that automatically just-in-time compiles fast and flexible kernels to reliably accelerate users' networks. It provides significant speedups for deep learning networks running on Volta and later CUDA accelerators by generating fast custom \u201cfusion\u201d kernels at runtime. nvFuser is specifically designed to meet the unique requirements of the PyTorch community, and it supports diverse network architectures and programs with dynamic inputs of varying shapes and strides.\nIn this blog post we\u2019ll describe nvFuser and how it\u2019s used today, show the significant performance improvements it can obtain on models from HuggingFace and TIMM, and look ahead to nvFuser in PyTorch 1.13 and beyond. If you would like to know more about how and why fusion improves the speed of training for Deep Learning networks, please see our previous talks on nvFuser from [GTC 2022](https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41958/) and [GTC 2021](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31952/).\nnvFuser relies on a graph representation of PyTorch operations to optimize and accelerate. Since PyTorch has an eager execution model, the PyTorch operations users are running are not directly accessible as a whole program that can be optimized by a system like nvFuser. Therefore, users must utilize systems built on top of nvFuser which are capable of capturing users' programs and translating them into a form that is optimizable by nvFuser. These higher level systems then pass these captured operations to nvFuser, so that nvFuser can optimize the execution of the user\u2019s script for NVIDIA GPUs. There are three systems that capture, translate, and pass user programs to nvFuser for optimization:", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "- [TorchScript jit.script](https://pytorch.org/docs/stable/generated/torch.jit.script.html#torch.jit.script)\n - This system directly parses sections of an annotated python script to translate into its own representation what the user is doing. This system then applies its own version of auto differentiation to the graph, and passes sections of the subsequent forward and backwards graphs to nvFuser for optimization.\n- [FuncTorch](https://pytorch.org/functorch/stable/generated/functorch.compile.memory_efficient_fusion.html#functorch.compile.memory_efficient_fusion)\n - This system doesn\u2019t directly look at the user python script, instead inserting a mechanism that captures PyTorch operations as they\u2019re being run. We refer to this type of capture system as \u201ctrace program acquisition\u201d, since we\u2019re tracing what has been performed. FuncTorch doesn\u2019t perform its own auto differentiation \u2013 it simply traces PyTorch\u2019s autograd directly to get backward graphs.\n- [TorchDynamo](https://github.com/pytorch/torchdynamo)\n - TorchDynamo is another program acquisition mechanism built on top of FuncTorch. TorchDynamo parses the Python bytecode produced from the user script in order to select portions to trace with FuncTorch. The benefit of TorchDynamo is that it\u2019s able to apply decorators to a user\u2019s script, effectively isolating what should be sent to FuncTorch, making it easier for FuncTorch to successfully trace complex Python scripts.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "These systems are available for users to interact with directly while nvFuser automatically and seamlessly optimizes performance critical regions of the user\u2019s code. These systems automatically send parsed user programs to nvFuser so nvFuser can:\n\n1. Analyze the operations being run on GPUs\n2. Plan parallelization and optimization strategies for those operations\n3. Apply those strategies in generated GPU code\n4. Runtime-compile the generated optimized GPU functions\n5. Execute those CUDA kernels on subsequent iterations", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
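+{"page_content": "As a hedged sketch of what handing a small function to these capture systems can look like (the function below is a made-up example, it needs a CUDA device, and nvFuser must be available and enabled in your PyTorch build for fusion to actually occur):\n\n```python\nimport torch\nfrom functorch.compile import memory_efficient_fusion\n\ndef bias_gelu(x, bias):\n    # A chain of pointwise ops -- a typical fusion candidate.\n    y = x + bias\n    return y * 0.5 * (1.0 + torch.erf(y / 1.41421))\n\nx = torch.randn(1024, 1024, device=\"cuda\", requires_grad=True)\nbias = torch.randn(1024, device=\"cuda\", requires_grad=True)\n\n# TorchScript path: jit.script parses the function into a graph.\nscripted = torch.jit.script(bias_gelu)\n\n# FuncTorch path: trace the function and hand the graphs to nvFuser.\nfused = memory_efficient_fusion(bias_gelu)\n\nfor fn in (scripted, fused):\n    for _ in range(3):  # the first iterations trigger compilation\n        out = fn(x, bias)\n        out.sum().backward()\n```", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}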
+{"page_content": "It is important to note nvFuser does not yet support all PyTorch operations, and there are still some scenarios that are actively being improved in nvFuser that are discussed herein. However, nvFuser does support many DL performance critical operations today, and the number of supported operations will grow in subsequent PyTorch releases. nvFuser is capable of generating highly specialized and optimized GPU functions for the operations it does have support for. This means nvFuser is able to power new PyTorch systems like TorchDynamo and FuncTorch to combine the flexibility PyTorch is known for with unbeatable performance.\n\n## nvFuser Performance", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Before getting into how to use nvFuser, in this section we\u2019ll show the improvements in training speed nvFuser provides for a variety of models from the [HuggingFace Transformers](https://github.com/huggingface/transformers) and [PyTorch Image Models (TIMM)](https://github.com/rwightman/pytorch-image-models) repositories and we will discuss current gaps in nvFuser performance that are under development today. All performance numbers in this section were taken using an NVIDIA A100 40GB GPU, and used either FuncTorch alone or Functorch with TorchDynamo.\n\n## HuggingFace Transformer Benchmarks\n\nnvFuser can dramatically accelerate training of HuggingFace Transformers when combined with another important optimization (more on that in a moment). Performance improvements can be seen in Figure 1 to range between 1.12x and 1.50x across a subset of popular HuggingFace Transformer networks.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Figure 1: Performance gains of 8 training scenarios from HuggingFace\u2019s Transformer repository. First performance boost in the dark green is due to replacing the optimizer with an NVIDIA Apex fused AdamW optimizer. The light green is due to adding nvFuser. Models were run with batch size and sequence lengths of [64, 128], [8, 512], [2, 1024], [64, 128], [8, 512], [8, src_seql=512, tgt_seql=128], [8, src_seql=1024, tgt_seql=128], and [8, 512] respectively. All networks were run with Automatic Mixed Precision (AMP) enabled with dtype=float16.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "While these speedups are significant, it\u2019s important to understand that nvFuser doesn\u2019t (yet) automate everything about running networks quickly. For HuggingFace Transformers, for example, it was important to use the AdamW fused optimizer from [NVIDIA\u2019s Apex repository](https://github.com/NVIDIA/apex) as the optimizer otherwise consumed a large portion of runtime. Using the fused AdamW optimizer to make the network faster exposes the next major performance bottleneck \u2014 memory bound operations. These operations are optimized by nvFuser, providing another large performance boost. With the fused optimizer and nvFuser enabled, the training speed of these networks improved by between 1.12x and 1.5x.\nHuggingFace Transformer models were run with [the torch.amp module](https://pytorch.org/docs/stable/amp.html). (\u201camp\u201d stands for Automated Mixed Precision, see the [\u201cWhat Every User Should Know about Mixed Precision in PyTorch\u201d](https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/) blog post for details.) An option to use nvFuser was added to HuggingFace\u2019s Trainer. If you have [TorchDynamo installed](https://github.com/pytorch/torchdynamo#requirements-and-setup), you can activate it to enable nvFuser in HuggingFace by passing *torchdynamo = \u2018nvfuser\u2019* to the Trainer class.\nnvFuser has great support for normalization kernels and related fusions frequently found in Natural Language Processing (NLP) models, and it is recommended users try nvFuser in their NLP workloads.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
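+{"page_content": "A sketch of what enabling this might look like (the argument names below follow the option described above plus the fused-optimizer choice; exactly how they are spelled and where they are passed may vary with your transformers version, and fp16 requires a CUDA device):\n\n```python\nfrom transformers import TrainingArguments\n\nargs = TrainingArguments(\n    output_dir=\"out\",\n    fp16=True,                  # AMP, matching the benchmarks above\n    optim=\"adamw_apex_fused\",   # NVIDIA Apex fused AdamW (requires Apex)\n    torchdynamo=\"nvfuser\",      # route captured graphs to nvFuser\n)\n# args would then be passed to transformers.Trainer together with your model and dataset as usual.\n```", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}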
+{"page_content": "## PyTorch Image Models (TIMM) Benchmarks\nnvFuser can also significantly reduce the training time of TIMM networks, up to over 1.3x vs. eager PyTorch, and up to 1.44x vs. eager PyTorch when combined with the torch.amp module. Figure 1 shows nvFuser\u2019s speedup without torch.amp, and when torch.amp is used with the NHWC (\u201cchannels last\u201d) and NCHW (\u201cchannels first\u201d) formats. nvFuser is integrated in TIMM through FuncTorch tracing directly (without TorchDynamo) and can be used by adding the [--aot-autograd command line argument](https://github.com/rwightman/pytorch-image-models/commit/ca991c1fa57373286b9876aa63370fd19f5d6032) when running the TIMM benchmark or training script.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Figure 1: The Y-axis is the performance gain nvFuser provides over not using nvFuser. A value of 1.0 means no change in perf, 2.0 would mean nvFuser is twice as fast, 0.5 would mean nvFuser takes twice the time to run. Square markers are with float16 Automatic Mixed Precision (AMP) and channels first contiguous inputs, circle markers are float32 inputs, and triangles are with float16 AMP and channels last contiguous inputs. Missing data points are due to an error being encountered when tracing.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "When running with float32 precision nvFuser provides a 1.12x geometric mean (\u201cgeomean\u201d) speedup on TIMM networks, and when running with torch.amp and \u201cchannels first\u201d it provides a 1.14x geomean speedup. However, nvFuser currently doesn\u2019t speed up torch.amp and \u201cchannels last\u201d training (a 0.9x geomean regression), so we recommend not using it in those cases. We are actively working on improving \u201cchannels last\u201d performance now, and soon we will have two additional optimization strategies (grid persistent optimizations for channels-last normalizations and fast transposes) which we expect will provide speedups comparable to \u201cchannels first\u201d in PyTorch version 1.13 and later. Many of nvFuser\u2019s optimizations can also help in inference cases. However, in PyTorch when running inference on small batch sizes, the performance is typically limited by CPU overhead, which nvFuser can\u2019t completely remove or fix. Therefore, typically the most important optimization for inference is to enable [CUDA Graphs](https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/) when possible. Once CUDA Graphs is enabled, it can also be beneficial to enable fusion through nvFuser. Performance of inference is shown in Figure 2 and Figure 3. Inference is only run with float16 AMP as it is uncommon to run inference workloads in full float32 precision.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Figure 2: Performance gains of enabling CUDA Graphs, and CUDA Graphs with nvFuser compared to the performance of native PyTorch without CUDA Graphs and nvFuser across TIMM models with float16 AMP, channels first inputs, and a batch size of 1 and 8 respectively. There is a geomean speedup of 2.74x with CUDA Graphs and 2.71x with CUDA Graphs + nvFuser respectively. nvFuser provides a maximum regression of 0.68x and a maximum performance gain of 2.74x (relative to CUDA Graphs without nvFuser). Performance gain is measured relative to the average time per iteration PyTorch takes without CUDA Graphs and without nvFuser. Models are sorted by how much additional performance nvFuser is providing.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Figure 3: Performance gains of enabling CUDA Graphs, and CUDA Graphs with nvFuser compared to the performance of native PyTorch without CUDA Graphs and nvFuser across TIMM models with AMP, channels last inputs, and a batch size of 1 and 8 respectively. There is a geomean speedup of 2.29x with CUDA Graphs and 2.95x with CUDA Graphs + nvFuser respectively. nvFuser provides a maximum regression of 0.86x and a maximum performance gain of 3.82x (relative to CUDA Graphs without nvFuser). Performance gain is measured relative to the average time per iteration PyTorch takes without CUDA Graphs and without nvFuser. Models are sorted by how much additional performance nvFuser is providing.", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "So far nvFuser performance has not been tuned for inference workloads so its performance benefit is not consistent across all cases. However, there are still many models that benefit significantly from nvFuser during inference and we encourage users to try nvFuser in inference workloads to see if you would benefit today. Performance of nvFuser in inference workloads will improve in the future and if you\u2019re interested in nvFuser in inference workloads please reach out to us on the PyTorch forums.\n\n## Getting Started - Accelerate Your Scripts with nvFuser", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "We\u2019ve created [a tutorial](https://pytorch.org/tutorials/intermediate/nvfuser_intro_tutorial.html) demonstrating how to take advantage of nvFuser to accelerate part of a standard transformer block, and how nvFuser can be used to define fast and novel operations. There are still some rough edges in nvFuser that we\u2019re working hard on improving as we\u2019ve outlined in this blog post. However we\u2019ve also demonstrated some great improvements for training speed on multiple networks in HuggingFace and TIMM and we expect there are opportunities in your networks where nvFuser can help today, and many more opportunities it will help in the future.\nIf you would like to learn more about nvFuser we recommend watching our presentations from NVIDIA\u2019s GTC conference [GTC 2022](https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s41958/) and [GTC 2021](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31952/).", "metadata": {"source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Introducing PyTorch Profiler - the new and improved performance tool'\nauthor: Maxim Lukiyanov - Principal PM at Microsoft, Guoliang Hua - Principal Engineering Manager at Microsoft, Geeta Chauhan - Partner Engineering Lead at Facebook, Gisle Dankel - Tech Lead at Facebook\n---\n\nAlong with [PyTorch 1.8.1 release](https://github.com/pytorch/pytorch/releases/tag/v1.8.1), we are excited to announce PyTorch Profiler \u2013 the new and improved performance debugging profiler for PyTorch. Developed as part of a collaboration between Microsoft and Facebook, the PyTorch Profiler is an open-source tool that enables accurate and efficient performance analysis and troubleshooting for large-scale deep learning models.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
+{"page_content": "Analyzing and improving large-scale deep learning model performance is an ongoing challenge that grows in importance as the model sizes increase. For a long time, PyTorch users had a hard time solving this challenge due to the lack of available tools. There were standard performance debugging tools that provide GPU hardware level information but missed PyTorch-specific context of operations. In order to recover missed information, users needed to combine multiple tools together or manually add minimum correlation information to make sense of the data. There was also the autograd profiler (```torch.autograd.profiler```) which can capture information about PyTorch operations but does not capture detailed GPU hardware-level information and cannot provide support for visualization.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
+{"page_content": "The new PyTorch Profiler (```torch.profiler```) is a tool that brings both types of information together and then builds experience that realizes the full potential of that information. This new profiler collects both GPU hardware and PyTorch related information, correlates them, performs automatic detection of bottlenecks in the model, and generates recommendations on how to resolve these bottlenecks. All of this information from the profiler is visualized for the user in TensorBoard. The new Profiler API is natively supported in PyTorch and delivers the simplest experience available to date where users can profile their models without installing any additional packages and see results immediately in TensorBoard with the new PyTorch Profiler plugin. Below is the screenshot of PyTorch Profiler - automatic bottleneck detection.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
+{"page_content": "## Getting started\n\nPyTorch Profiler is the next version of the PyTorch autograd profiler. It has a new module namespace ```torch.profiler``` but maintains compatibility with autograd profiler APIs. The Profiler uses a new GPU profiling engine, built using Nvidia CUPTI APIs, and is able to capture GPU kernel events with high fidelity. To profile your model training loop, wrap the code in the profiler context manager as shown below.\n\n```python\nwith torch.profiler.profile(\n    schedule=torch.profiler.schedule(\n        wait=2,\n        warmup=2,\n        active=6,\n        repeat=1),\n    on_trace_ready=torch.profiler.tensorboard_trace_handler('./log'),\n    with_stack=True\n) as profiler:\n    for step, data in enumerate(trainloader, 0):\n        print(\"step:{}\".format(step))\n        inputs, labels = data[0].to(device=device), data[1].to(device=device)\n\n        outputs = model(inputs)\n        loss = criterion(outputs, labels)", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
+{"page_content": "        optimizer.zero_grad()\n        loss.backward()\n        optimizer.step()\n        profiler.step()\n```\nThe ```schedule``` parameter allows you to limit the number of training steps included in the profile to reduce the amount of data collected and simplify visual analysis by focusing on what\u2019s important. The ```tensorboard_trace_handler``` automatically saves profiling results to disk for analysis in TensorBoard.\n\nTo view results of the profiling session in TensorBoard, install the PyTorch Profiler TensorBoard plugin package.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
+{"page_content": "```\npip install torch_tb_profiler\n```\n## Visual Studio Code Integration\n[Microsoft Visual Studio Code](https://code.visualstudio.com/) is one of the most popular code editors for Python developers and data scientists. The [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for VS Code recently added the integration of TensorBoard into the code editor, including support for the PyTorch Profiler. Once you have VS Code and the Python extension installed, you can quickly open the TensorBoard Profiler plugin by launching the Command Palette using the keyboard shortcut CTRL + SHIFT + P (CMD + SHIFT + P on a Mac) and typing the \u201cLaunch TensorBoard\u201d command.", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
+{"page_content": "This integration comes with a built-in lifecycle management feature. VS Code will install the TensorBoard package and the PyTorch Profiler plugin package (coming in mid-April) automatically if you don\u2019t have them on your system. VS Code will also launch the TensorBoard process for you and automatically look for any TensorBoard log files within your current directory. When you\u2019re done, just close the tab and VS Code will automatically close the process. No more Terminal windows running on your system to provide a backend for the TensorBoard UI! Below is PyTorch Profiler Trace View running in TensorBoard.\n\nLearn more about TensorBoard support in VS Code in [this blog](https://devblogs.microsoft.com/python/python-in-visual-studio-code-february-2021-release/).", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
+{"page_content": "## Feedback\n\nReview [PyTorch Profiler documentation](https://pytorch.org/docs/stable/profiler.html), give Profiler a try and let us know about your experience. Provide your feedback on [PyTorch Discussion Forum](https://discuss.pytorch.org/) or file issues on [PyTorch GitHub](https://github.com/pytorch/pytorch).", "metadata": {"source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"How Disney Improved Activity Recognition Through Multimodal Approaches with PyTorch\"\nauthor: Monica Alfaro, Albert Aparicio, Francesc Guitart, Marc Junyent, Pablo Pernias, Marcel Porta, and Miquel \u00c0ngel Farr\u00e9 (former Senior Technology Manager)\nfeatured-img: 'assets/images/disney_media_logo.jpg'\n---\n\n# Introduction\n\nAmong the many things Disney Media & Entertainment Distribution (DMED) is responsible for is the management and distribution of a huge array of media assets including news, sports, entertainment and features, episodic programs, marketing and advertising and more.\n\nOur team focuses on media annotation as part of DMED Technology\u2019s content platforms group. In our day-to-day work, we automatically analyze a variety of content that constantly challenges the efficiency of our machine learning workflow and the accuracy of our models.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Several of our colleagues recently discussed the workflow efficiencies that we achieved by switching to an end-to-end video analysis pipeline using PyTorch, as well as how we approach animated character recognition. We invite you to read more about both in this previous post.\n\nWhile the conversion to an end-to-end PyTorch pipeline is a solution that any company might benefit from, animated character recognition was a uniquely-Disney concept and solution.\n\nIn this article we will focus on activity recognition, which is a general challenge across industries \u2014 but with some specific opportunities when leveraged in the media production field, because we can combine audio, video, and subtitles to provide a solution.\n\n# Experimenting with Multimodality", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Working on a multimodal problem adds more complexity to the usual training pipelines. Having multiple information modes for each example means that the multimodal pipeline has to have specific implementations to process each mode in the dataset. Usually after this processing step, the pipeline has to merge or fuse the outputs.\n\nOur initial experiments in multimodality were completed using the [MMF framework](https://github.com/facebookresearch/mmf). MMF is a modular framework for vision and language multimodal research. MMF contains reference implementations of state-of-the-art vision and language models and has also powered multiple research projects at Meta AI Research (as seen in this [poster](https://assets.pytorch.org/pted2021/posters/A3.png) presented in PyTorch Ecosystem Day 2020). Along with the recent release of TorchMultimodal, a PyTorch library for training state-of-the-art multimodal models at scale, MMF highlights the growing interest in Multimodal understanding.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "MMF tackles this complexity with modular management of all the elements of the pipeline through a wide set of different implementations for specific modules, ranging from the processing of the modalities to the fusion of the processed information.\n\nIn our scenario, MMF was a great entry point to experiment with multimodality. It allowed us to iterate quickly by combining audio, video and closed captioning and experiment at different levels of scale with certain multimodal models, shifting from a single GPU to TPU Pods.\n\n# Multimodal Transformers\n\nWith a workbench based on MMF, our initial model was based on a concatenation of features from each modality evolving to a pipeline that included a Transformer-based fusion module to combine the different input modes.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Specifically, we made use of the fusion module called MMFTransformer, developed in collaboration with the Meta AI Research team. This is an implementation based on [VisualBERT](https://arxiv.org/abs/1908.03557) for which the necessary modifications were added to be able to work with text, audio and video.\n\nDespite having decent results with the out-of-box implementation MMFTransformer, we were still far from our goal, and the Transformers-based models required more data than we had available.\n\n# Searching for less data-hungry solutions\n\nSearching for less data-hungry solutions, our team started studying [MLP-Mixer](https://arxiv.org/abs/2105.01601). This new architecture has been proposed by the Google Brain team and it provides an alternative to well established de facto architectures like convolutions or self-attention for computer vision tasks.\n\n## MLP-Mixer", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "The core idea behind mixed variations consists of replacing the convolutions or self-attention mechanisms used in transformers with Multilayer Perceptrons. This change in architecture favors the performance of the model in high data regimes (especially with respect to the Transformers), while also opening some questions regarding the inductive biases hidden in the convolutions and the self-attention layers.\n\nThose proposals perform great in solving image classification tasks by splitting the image in chunks, flattening those chunks into 1D vectors and passing them through a sequence of Mixer Layers.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Inspired by the advantages of Mixer based architectures, our team searched for parallelisms with the type of problems we try to solve in video classification: specifically, instead of a single image, we have a set of frames that need to be classified, along with audio and closed captioning in the shape of new modalities.\n\n# Activity Recognition reinterpreting the MLP-Mixer\n\nOur proposal takes the core idea of the [MLP-Mixer](https://arxiv.org/abs/2105.01601) \u2014 using multiple multi-layer perceptrons on a sequence and transposed sequence and extends it into a Multi Modal framework that allows us to process video, audio & text with the same architecture.\n\nFor each of the modalities, we use different extractors that will provide embeddings describing the content. Given the embeddings of each modality, the MLP-Mixer architecture solves the problem of deciding which of the modalities might be the most important, while also weighing how much each modality contributes to the final labeling.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "For example, when it comes to detecting laughs, sometimes the key information is in audio or in the frames, and in some of the cases we have a strong signal in the closed caption.\n\nWe tried processing each frame separately with a ResNet34 and getting a sequence of embeddings and by using a video-specific model called R3D, both pre-trained on ImageNet and Kinetics400 respectively.\n\nTo process the audio, we use the pretrained ResNet34, and we remove the final layers to be able to extract 2D embeddings from the audio spectrograms (for 224x224 images we end up with 7x7 embeddings).\n\nFor closed captioning, we are using a pre-trained BERT-large, with all layers frozen, except for the Embeddings & LayerNorms.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Once we have extracted the embedding from each modality, we concatenate them into a single sequence and pass it through a set of MLP-Mixer blocks; next we use average pooling & a classification head to get predictions.\n\nOur experiments have been performed on a custom, manually labeled dataset for activity recognition with 15 classes, which we know from experiments are hard and cannot all be predicted accurately using a single modality.\n\nThese experiments have shown a significant increase in performance using our approach, especially in a low/mid-data regime (75K training samples).\n\nWhen it comes to using only Text and Audio, our experiments showed a 15 percent improvement in accuracy over using a classifier on top of the features extracted by state-of-the-art backbones.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
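+{"page_content": "A very rough, hypothetical sketch of the fusion stage just described (all module names, dimensions, and the block composition are illustrative assumptions, not Disney's implementation; the per-modality embeddings are assumed to be pre-extracted):\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass MixerBlock(nn.Module):\n    # Token-mixing MLP across the sequence, then channel-mixing MLP per token.\n    def __init__(self, seq_len, dim):\n        super().__init__()\n        self.norm1 = nn.LayerNorm(dim)\n        self.token_mlp = nn.Sequential(nn.Linear(seq_len, seq_len), nn.GELU(), nn.Linear(seq_len, seq_len))\n        self.norm2 = nn.LayerNorm(dim)\n        self.channel_mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))\n\n    def forward(self, x):                        # x: (batch, seq, dim)\n        y = self.norm1(x).transpose(1, 2)        # mix across the sequence axis\n        x = x + self.token_mlp(y).transpose(1, 2)\n        return x + self.channel_mlp(self.norm2(x))\n\nclass MultimodalMixerHead(nn.Module):\n    def __init__(self, seq_len, dim, num_classes, depth=4):\n        super().__init__()\n        self.blocks = nn.Sequential(*[MixerBlock(seq_len, dim) for _ in range(depth)])\n        self.head = nn.Linear(dim, num_classes)\n\n    def forward(self, video_emb, audio_emb, text_emb):\n        # Concatenate pre-extracted per-modality embeddings into one sequence.\n        x = torch.cat([video_emb, audio_emb, text_emb], dim=1)\n        x = self.blocks(x)\n        return self.head(x.mean(dim=1))          # average pooling + classification head\n\n# e.g. 16 video, 8 audio and 8 text tokens of size 256, 15 activity classes\nmodel = MultimodalMixerHead(seq_len=32, dim=256, num_classes=15)\nlogits = model(torch.randn(2, 16, 256), torch.randn(2, 8, 256), torch.randn(2, 8, 256))\n```", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}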
+{"page_content": "Using Text, Audio and Video we have seen a 17 percent improvement in accuracy over using Meta AI\u2019s MMF Framework, which uses a VisualBERT-like model to combine modalities using more powerful state of the art backbones.\n\nCurrently, we have extended the initial model to cover up to 55 activity classes and 45 event classes. One of the challenges we expect to improve upon in the future is to include all activities and events, even those that are less frequent.\n\n## Interpreting the MLP-Mixer mode combinations \n\nAn MLP-Mixer is a concatenation of MultiLayer Perceptrons. This can be, very roughly, approximated to a linear operation, in the sense that, once trained, the weights are fixed and the input will directly affect the output.\n\nOnce we assume that approximation, we also assume that for an input consisting of NxM numbers, we could find a NxM matrix that (when multiplied elementwise) could approximate the predictions of the MLP-Mixer for a class.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "We will call this matrix a stencil, and if we have access to it, we can find what parts of the input embeddings are responsible for a specific prediction.\n\nYou can think of it as a punch card with holes in specific positions. Only information in those positions will pass and contribute to a specific prediction. So we can measure the intensity of the input at those positions.\n\nOf course, this is an oversimplification, and there won't exist a unique stencil that perfectly represents all of the contributions of the input to a class (otherwise that would mean that the problem could be solved linearly). So this should be used for visualization purposes only, not as an accurate predictor.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
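+{"page_content": "To make the stencil idea concrete, here is a toy sketch (shapes and names are hypothetical, not taken from the project): a fixed N x M stencil is applied elementwise to an input embedding matrix, and the surviving intensities give a rough, linear approximation of that class's score and show which positions contribute; how such a stencil is found by back-propagation is described next.\n\n```python\nimport torch\n\nN, M = 6, 8                       # toy sequence length and embedding size\ninputs = torch.randn(N, M)        # embeddings that would be fed to the trained mixer\nstencil = torch.rand(N, M)        # one stencil per class (found by optimization, see below)\n\ncontribution = stencil * inputs           # elementwise \"punch card\": only some positions pass\napprox_class_score = contribution.sum()   # rough linear approximation of the class logit\n\n# Per-position contribution, useful purely for visualization.\nper_position = contribution.abs() / contribution.abs().sum()\nprint(approx_class_score, per_position.shape)\n```", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}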
+{"page_content": "Once we have a set of stencils for each class, we can effortlessly measure input contribution without relying on any external visualization techniques.\n\nTo find a stencil, we can start from a \"random noise\" stencil and optimize it to maximize the activations for a specific class by just back-propagating through the MLP-Mixer.\n\nBy doing this we can end up with many valid stencils, and we can reduce them to a few by using K-means to cluster them into similar stencils and averaging each cluster.\n\n# Using the Mixer to get the best of each world\n\nMLP-Mixer, used as an image classification model without convolutional layers, requires a lot of data, since the lack of inductive bias \u2013 one of the model's good points overall \u2013 is a weakness when it comes to working in low data domains.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "When used as a way to combine information previously extracted by large pretrained backbones (as opposed to being used as a full end-to-end solution), they shine. The Mixer\u2019s strength lies in finding temporal or structural coherence between different inputs. For example, in video-related tasks we could extract embeddings from the frames using a powerful, pretrained model that understands what is going on at frame level and use the mixer to make sense of it in a sequential manner.\n\nThis way of using the Mixer allows us to work with limited amounts of data and still get better results than what was achieved with Transformers. This is because Mixers seem to be more stable during training and seem to pay attention to all the inputs, while Transformers tend to collapse and pay attention only to some modalities/parts of the sequence.\n\nAcknowledgements: We would like to thank the Meta AI Research and Partner Engineering teams for this collaboration.", "metadata": {"source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Practical Quantization in PyTorch'\nauthor: Suraj Subramanian, Mark Saroufim, Jerry Zhang\nfeatured-img: ''\n---\n\nQuantization is a cheap and easy way to make your DNN run faster and with lower memory requirements. PyTorch offers a few different approaches to quantize your model. In this blog post, we'll lay a (quick) foundation of quantization in deep learning, and then take a look at what each technique looks like in practice. Finally, we'll end with recommendations from the literature for using quantization in your workflows.\n\nFig 1. PyTorch <3 Quantization\n\n**Contents**\n* TOC\n{:toc}\n## Fundamentals of Quantization\n\n> If someone asks you what time it is, you don't respond \"10:14:34:430705\", but you might say \"a quarter past 10\".\n\nQuantization has roots in information compression; in deep networks it refers to reducing the numerical precision of its weights and/or activations.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "Overparameterized DNNs have more degrees of freedom and this makes them good candidates for information compression [[1]]. When you quantize a model, two things generally happen - the model gets smaller and runs with better efficiency. Hardware vendors explicitly allow for faster processing of 8-bit data (than 32-bit data) resulting in higher throughput. A smaller model has lower memory footprint and power consumption [[2]], crucial for deployment at the edge.\n\n### Mapping function\nThe mapping function is what you might guess - a function that maps values from floating-point to integer space. A commonly used mapping function is a linear transformation given by $Q(r) = \\mathrm{round}(r/S + Z)$, where $r$ is the input and $S, Z$ are **quantization parameters**.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "To reconvert to floating point space, the inverse function is given by r' = (Q(r) - Z) * S.\n\nIn general r' != r, and their difference constitutes the *quantization error*.\n\n### Quantization Parameters\nThe mapping function is parameterized by the **scaling factor** S and the **zero-point** Z.\n\nS is simply the ratio of the input range to the output range: S = (β - α) / (β_q - α_q)", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "where [α, β] is the clipping range of the input, i.e. the boundaries of permissible inputs, and [α_q, β_q] is the range in quantized output space that it is mapped to. For 8-bit quantization, the output range satisfies β_q - α_q <= 2^8 - 1 = 255.\n\nZ acts as a bias to ensure that a 0 in the input space maps perfectly to a 0 in the quantized space: Z = -(α/S - α_q), rounded to the nearest integer.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "### Calibration\nThe process of choosing the input clipping range is known as **calibration**. The simplest technique (also the default in PyTorch) is to record the running minimum and maximum values and assign them to α and β. [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/calib.html) also uses entropy minimization (KL divergence), mean-square-error minimization, or percentiles of the input range.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
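+{"page_content": "Putting the pieces so far together, here is a small sketch (not from the original post) that calibrates [α, β] with a simple min/max, derives S and Z, and round-trips a tensor through quantization and back; the gap between the input and the reconstruction is the quantization error.\n\n```python\nimport torch\n\nx = torch.randn(4)\n\nalpha, beta = x.min().item(), x.max().item()  # clipping range [α, β] from min/max calibration\nalpha_q, beta_q = 0, 255                      # 8-bit output range [α_q, β_q]\n\nS = (beta - alpha) / (beta_q - alpha_q)       # scale\nZ = round(alpha_q - alpha / S)                # zero-point (an integer)\n\nx_q = torch.clamp(torch.round(x / S + Z), alpha_q, beta_q)  # Q(r) = round(r/S + Z)\nx_hat = (x_q - Z) * S                                       # dequantize\nprint((x - x_hat).abs().max())                              # quantization error\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}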
+{"page_content": "In PyTorch, `Observer` modules ([docs](https://PyTorch.org/docs/stable/torch.quantization.html?highlight=observer#observers), [code](https://github.com/PyTorch/PyTorch/blob/748d9d24940cd17938df963456c90fa1a13f3932/torch/ao/quantization/observer.py#L88)) collect statistics on the input values and calculate the qparams S and Z. Different calibration schemes result in different quantized outputs, and it's best to empirically verify which scheme works best for your application and architecture (more on that later).\n\n```python\nimport torch\nfrom torch.quantization.observer import MinMaxObserver, MovingAverageMinMaxObserver, HistogramObserver\nC, L = 3, 4\nnormal = torch.distributions.normal.Normal(0,1)\ninputs = [normal.sample((C, L)), normal.sample((C, L))]\nprint(inputs)\n\n# >>>>>\n# [tensor([[-0.0590, 1.1674, 0.7119, -1.1270],\n# [-1.3974, 0.5077, -0.5601, 0.0683],\n# [-0.0929, 0.9473, 0.7159, -0.4574]]),", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "# tensor([[-0.0236, -0.7599, 1.0290, 0.8914],\n# [-1.1727, -1.2556, -0.2271, 0.9568],\n# [-0.2500, 1.4579, 1.4707, 0.4043]])]\n\nobservers = [MinMaxObserver(), MovingAverageMinMaxObserver(), HistogramObserver()]\nfor obs in observers:\n for x in inputs: obs(x) \n print(obs.__class__.__name__, obs.calculate_qparams())\n\n# >>>>>\n# MinMaxObserver (tensor([0.0112]), tensor([124], dtype=torch.int32))\n# MovingAverageMinMaxObserver (tensor([0.0101]), tensor([139], dtype=torch.int32))\n# HistogramObserver (tensor([0.0100]), tensor([106], dtype=torch.int32))\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "### Affine and Symmetric Quantization Schemes\n**Affine or asymmetric quantization** schemes assign the input range to the min and max observed values. Affine schemes generally offer tighter clipping ranges and are useful for quantizing non-negative activations (you don't need the input range to contain negative values if your input tensors are never negative). The range is calculated as α = min(r), β = max(r). Affine quantization leads to more computationally expensive inference when used for weight tensors [[3]].\n\n**Symmetric quantization** schemes center the input range around 0, eliminating the need to calculate a zero-point offset. The range is calculated as -α = β = max(|max(r)|, |min(r)|). For skewed signals (like non-negative activations) this can result in bad quantization resolution because the clipping range includes values that never show up in the input (see the pyplot below).", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "```python\nimport torch\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nact = torch.distributions.pareto.Pareto(1, 10).sample((1,1024)).flatten()\nweights = torch.distributions.normal.Normal(0, 0.12).sample((3, 64, 7, 7)).flatten()\n\ndef get_symmetric_range(x):\n beta = torch.max(x.max(), x.min().abs())\n return -beta.item(), beta.item()\n\ndef get_affine_range(x):\n return x.min().item(), x.max().item()\n\ndef plot(ax, data, scheme):\n boundaries = get_affine_range(data) if scheme == 'affine' else get_symmetric_range(data)\n a, _, _ = ax.hist(data, density=True, bins=100)\n ymin, ymax = np.quantile(a[a>0], [0.25, 0.95])\n ax.vlines(x=boundaries, ls='--', colors='purple', ymin=ymin, ymax=ymax)\n\nfig, axs = plt.subplots(2,2)\nplot(axs[0, 0], act, 'affine')\naxs[0, 0].set_title(\"Activation, Affine-Quantized\")\n\nplot(axs[0, 1], act, 'symmetric')\naxs[0, 1].set_title(\"Activation, Symmetric-Quantized\")\n\nplot(axs[1, 0], weights, 'affine')\naxs[1, 0].set_title(\"Weights, Affine-Quantized\")\n\nplot(axs[1, 1], weights, 'symmetric')\naxs[1, 1].set_title(\"Weights, Symmetric-Quantized\")\nplt.show()\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "Fig 2. Clipping ranges (in purple) for affine and symmetric schemes\n\nIn PyTorch, you can specify affine or symmetric schemes while initializing the Observer. Note that not all observers support both schemes.\n\n```python\nfor qscheme in [torch.per_tensor_affine, torch.per_tensor_symmetric]:\n obs = MovingAverageMinMaxObserver(qscheme=qscheme)\n for x in inputs: obs(x)\n print(f\"Qscheme: {qscheme} | {obs.calculate_qparams()}\")\n\n# >>>>>\n# Qscheme: torch.per_tensor_affine | (tensor([0.0101]), tensor([139], dtype=torch.int32))\n# Qscheme: torch.per_tensor_symmetric | (tensor([0.0109]), tensor([128]))\n```\n\n### Per-Tensor and Per-Channel Quantization Schemes\nQuantization parameters can be calculated for the layer's entire weight tensor as a whole, or separately for each channel. In per-tensor quantization, the same clipping range is applied to all the channels in a layer.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "Fig 3. Per-Channel uses one set of qparams for each channel. Per-tensor uses the same qparams for the entire tensor.\n\nFor weight quantization, symmetric-per-channel quantization provides better accuracies; per-tensor quantization performs poorly, possibly due to high variance in conv weights across channels from batchnorm folding [[3]].\n\n```python\nfrom torch.quantization.observer import MovingAveragePerChannelMinMaxObserver\nobs = MovingAveragePerChannelMinMaxObserver(ch_axis=0) # calculate qparams for all `C` channels separately\nfor x in inputs: obs(x)\nprint(obs.calculate_qparams())\n\n# >>>>>\n# (tensor([0.0090, 0.0075, 0.0055]), tensor([125, 187, 82], dtype=torch.int32))\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "### Backend Engine\nCurrently, quantized operators run on x86 machines via the [FBGEMM backend](https://github.com/pytorch/FBGEMM), or use [QNNPACK](https://github.com/pytorch/QNNPACK) primitives on ARM machines. Backend support for server GPUs (via TensorRT and cuDNN) is coming soon. Learn more about extending quantization to custom backends: [RFC-0019](https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md).\n\n```python\nbackend = 'fbgemm' if x86 else 'qnnpack'\nqconfig = torch.quantization.get_default_qconfig(backend) \ntorch.backends.quantized.engine = backend\n```\n\n\n### QConfig\n\nThe `QConfig` ([code](https://github.com/PyTorch/PyTorch/blob/d6b15bfcbdaff8eb73fa750ee47cef4ccee1cd92/torch/ao/quantization/qconfig.py#L165), [docs](https://pytorch.org/docs/stable/torch.quantization.html?highlight=qconfig#torch.quantization.QConfig)) NamedTuple stores the Observers and the quantization schemes used to quantize activations and weights.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "Be sure to pass the Observer class (not the instance), or a callable that can return Observer instances. Use `with_args()` to override the default arguments.\n\n```python\nmy_qconfig = torch.quantization.QConfig(\n activation=MovingAverageMinMaxObserver.with_args(qscheme=torch.per_tensor_affine),\n weight=MovingAveragePerChannelMinMaxObserver.with_args(dtype=torch.qint8)\n)\n# >>>>>\n# QConfig(activation=functools.partial(<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>, qscheme=torch.per_tensor_affine){}, weight=functools.partial(<class 'torch.ao.quantization.observer.MovingAveragePerChannelMinMaxObserver'>, dtype=torch.qint8){})\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "## In PyTorch\n\nPyTorch offers a few different ways to quantize your model, depending on:\n- whether you prefer a flexible but manual process or a restricted automagic one (*Eager Mode* v/s *FX Graph Mode*)\n- whether qparams for quantizing activations (layer outputs) are precomputed for all inputs or calculated afresh with each input (*static* v/s *dynamic*)\n- whether qparams are computed with or without retraining (*quantization-aware training* v/s *post-training quantization*)\n\nFX Graph Mode automatically fuses eligible modules, inserts Quant/DeQuant stubs, calibrates the model and returns a quantized module - all in two method calls - but only for networks that are [symbolically traceable](https://PyTorch.org/docs/stable/fx.html#torch.fx.symbolic_trace). The examples below contain the calls using Eager Mode and FX Graph Mode for comparison.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "In DNNs, eligible candidates for quantization are the FP32 weights (layer parameters) and activations (layer outputs). Quantizing weights reduces the model size. Quantized activations typically result in faster inference.\n\nAs an example, the 50-layer ResNet network has ~26 million weight parameters and computes ~16 million activations in the forward pass.\n\n### Post-Training Dynamic/Weight-only Quantization \nHere the model's weights are pre-quantized; the activations are quantized on-the-fly (\"dynamic\") during inference. The simplest of all approaches, it has a one line API call in `torch.quantization.quantize_dynamic`. Currently only Linear and Recurrent (`LSTM`, `GRU`, `RNN`) layers are supported for dynamic quantization.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "**(+)** Can result in higher accuracies since the clipping range is exactly calibrated for each input [[1]].\n \n **(+)** Dynamic quantization is preferred for models like LSTMs and Transformers where writing/retrieving the model's weights from memory dominates bandwidth [[4]]. \n \n **(-)** Calibrating and quantizing the activations at each layer during runtime can add to the compute overhead. \n\n```python\nimport torch\nfrom torch import nn\n\n# toy model\nm = nn.Sequential(\n nn.Conv2d(2, 64, 8),\n nn.ReLU(),\n nn.Linear(16,10),\n nn.LSTM(10, 10))\n\nm.eval()\n\n## EAGER MODE\nfrom torch.quantization import quantize_dynamic\nmodel_quantized = quantize_dynamic(\n model=m, qconfig_spec={nn.LSTM, nn.Linear}, dtype=torch.qint8, inplace=False\n)", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "## FX MODE\nfrom torch.quantization import quantize_fx\nqconfig_dict = {\"\": torch.quantization.default_dynamic_qconfig} # An empty key denotes the default applied to all modules\nmodel_prepared = quantize_fx.prepare_fx(m, qconfig_dict)\nmodel_quantized = quantize_fx.convert_fx(model_prepared)\n```\n\n### Post-Training Static Quantization (PTQ)\nPTQ also pre-quantizes model weights but instead of calibrating activations on-the-fly, the clipping range is pre-calibrated and fixed (\"static\") using validation data. Activations stay in quantized precision between operations during inference. About 100 mini-batches of representative data are sufficient to calibrate the observers [[2]]. The examples below use random data in calibration for convenience - using that in your application will result in bad qparams.\n\nFig 4. Steps in Post-Training Static Quantization", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "[Module fusion](https://pytorch.org/tutorials/recipes/fuse.html) combines multiple sequential modules (eg: `[Conv2d, BatchNorm, ReLU]`) into one. Fusing modules means the compiler needs to only run one kernel instead of many; this speeds things up and improves accuracy by reducing quantization error.\n\n**(+)** Static quantization has faster inference than dynamic quantization because it eliminates the float<->int conversion costs between layers. \n\n**(-)** Static quantized models may need regular re-calibration to stay robust against distribution-drift.\n\n\n```python\n# Static quantization of a model consists of the following steps:\n\n# Fuse modules\n# Insert Quant/DeQuant Stubs\n# Prepare the fused module (insert observers before and after layers)\n# Calibrate the prepared module (pass it representative data)\n# Convert the calibrated module (replace with quantized version)\n\nimport torch\nfrom torch import nn\nimport copy\n\nbackend = \"fbgemm\" # running on a x86 CPU. Use \"qnnpack\" if running on ARM.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "model = nn.Sequential(\n nn.Conv2d(2,64,3),\n nn.ReLU(),\n nn.Conv2d(64, 128, 3),\n nn.ReLU()\n)\n\n## EAGER MODE\nm = copy.deepcopy(model)\nm.eval()\n\"\"\"Fuse\n- Inplace fusion replaces the first module in the sequence with the fused module, and the rest with identity modules\n\"\"\"\ntorch.quantization.fuse_modules(m, ['0','1'], inplace=True) # fuse first Conv-ReLU pair\ntorch.quantization.fuse_modules(m, ['2','3'], inplace=True) # fuse second Conv-ReLU pair\n\n\"\"\"Insert stubs\"\"\"\nm = nn.Sequential(torch.quantization.QuantStub(), \n *m, \n torch.quantization.DeQuantStub())\n\n\"\"\"Prepare\"\"\"\nm.qconfig = torch.quantization.get_default_qconfig(backend)\ntorch.quantization.prepare(m, inplace=True)\n\n\"\"\"Calibrate\n- This example uses random data for convenience. Use representative (validation) data instead.\n\"\"\"\nwith torch.inference_mode():\n  for _ in range(10):\n    x = torch.rand(1,2, 28, 28)\n    m(x)\n\n\"\"\"Convert\"\"\"\ntorch.quantization.convert(m, inplace=True)", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "\"\"\"Check\"\"\"\nprint(m[1].weight().element_size()) # 1 byte instead of 4 bytes for FP32\n\n\n## FX GRAPH\nfrom torch.quantization import quantize_fx\nm = copy.deepcopy(model)\nm.eval()\nqconfig_dict = {\"\": torch.quantization.get_default_qconfig(backend)}\n# Prepare\nmodel_prepared = quantize_fx.prepare_fx(m, qconfig_dict)\n# Calibrate - Use representative (validation) data.\nwith torch.inference_mode():\n  for _ in range(10):\n    x = torch.rand(1,2,28, 28)\n    model_prepared(x)\n# quantize\nmodel_quantized = quantize_fx.convert_fx(model_prepared)\n```\n\n### Quantization-aware Training (QAT)\n\nFig 5. Steps in Quantization-Aware Training", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "The PTQ approach is great for large models, but accuracy suffers in smaller models [[6]]. This is of course due to the loss in numerical precision when adapting a model from FP32 to the INT8 realm *(Figure 6(a))*. QAT tackles this by including this quantization error in the training loss, thereby training an INT8-first model.\n\nFig 6. Comparison of PTQ and QAT convergence [3]", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "All weights and biases are stored in FP32, and backpropagation happens as usual. However in the forward pass, quantization is internally simulated via `FakeQuantize` modules. They are called fake because they quantize and immediately dequantize the data, adding quantization noise similar to what might be encountered during quantized inference. The final loss thus accounts for any expected quantization errors. Optimizing on this allows the model to identify a wider region in the loss function *(Figure 6(b))*, and identify FP32 parameters such that quantizing them to INT8 does not significantly affect accuracy.\n\nFig 7. Fake Quantization in the forward and backward pass\nImage source: https://developer.nvidia.com/blog/achieving-fp32-accuracy-for-int8-inference-using-quantization-aware-training-with-tensorrt", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "**(+)** QAT yields higher accuracies than PTQ.\n\n**(+)** Qparams can be learned during model training for more fine-grained accuracy (see [LearnableFakeQuantize](https://github.com/pytorch/pytorch/blob/master/torch/ao/quantization/_learnable_fake_quantize.py))\n\n**(-)** Computational cost of retraining a model in QAT can be several hundred epochs [[1]]\n\n\n```python\n# QAT follows the same steps as PTQ, with the exception of the training loop before you actually convert the model to its quantized version\n\nimport torch\nfrom torch import nn\n\nbackend = \"fbgemm\" # running on a x86 CPU. Use \"qnnpack\" if running on ARM.\n\nm = nn.Sequential(\n nn.Conv2d(2,64,8),\n nn.ReLU(),\n nn.Conv2d(64, 128, 8),\n nn.ReLU()\n)\n\n\"\"\"Fuse\"\"\"\ntorch.quantization.fuse_modules(m, ['0','1'], inplace=True) # fuse first Conv-ReLU pair\ntorch.quantization.fuse_modules(m, ['2','3'], inplace=True) # fuse second Conv-ReLU pair", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "\"\"\"Insert stubs\"\"\"\nm = nn.Sequential(torch.quantization.QuantStub(), \n *m, \n torch.quantization.DeQuantStub())\n\n\"\"\"Prepare\"\"\"\nm.train()\nm.qconfig = torch.quantization.get_default_qat_qconfig(backend)\ntorch.quantization.prepare_qat(m, inplace=True)\n\n\"\"\"Training Loop\"\"\"\nn_epochs = 10\nopt = torch.optim.SGD(m.parameters(), lr=0.1)\nloss_fn = lambda out, tgt: torch.pow(tgt-out, 2).mean()\nfor epoch in range(n_epochs):\n x = torch.rand(10,2,24,24)\n out = m(x)\n loss = loss_fn(out, torch.rand_like(out))\n opt.zero_grad()\n loss.backward()\n opt.step()\n\n\"\"\"Convert\"\"\"\nm.eval()\ntorch.quantization.convert(m, inplace=True)\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
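+{"page_content": "As a rough illustration (a minimal sketch, not from the original post) of what the `FakeQuantize` modules inserted by `prepare_qat` do in the forward pass: a quantize-then-dequantize round trip, so tensors stay in FP32 but only take values representable in INT8. The real modules additionally use a straight-through estimator so that gradients can flow through the rounding step.\n\n```python\nimport torch\n\ndef fake_quantize(x, scale, zero_point, qmin=0, qmax=255):\n    # Quantize and immediately dequantize: the output is still FP32,\n    # but it carries the same noise a quantized inference pass would add.\n    x_q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)\n    return (x_q - zero_point) * scale\n\nx = torch.randn(5)\nprint(x)\nprint(fake_quantize(x, scale=0.02, zero_point=128))\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}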
+{"page_content": "## Sensitivity Analysis\nNot all layers respond to quantization equally; some are more sensitive to precision drops than others. Identifying the optimal combination of layers that minimizes accuracy drop is time-consuming, so [[3]] suggests a one-at-a-time sensitivity analysis to identify which layers are most sensitive, and retain FP32 precision on those. In their experiments, skipping just 2 conv layers (out of a total 28 in MobileNet v1) gives them near-FP32 accuracy. Using FX Graph Mode, we can create custom qconfigs to do this easily:\n\n```python\n# ONE-AT-A-TIME SENSITIVITY ANALYSIS \n\nfor quantized_layer, _ in model.named_modules():\n print(\"Only quantizing layer: \", quantized_layer)\n\n # The module_name key allows module-specific qconfigs. \n qconfig_dict = {\"\": None, \n \"module_name\":[(quantized_layer, torch.quantization.get_default_qconfig(backend))]}", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "model_prepared = quantize_fx.prepare_fx(model, qconfig_dict)\n # calibrate\n model_quantized = quantize_fx.convert_fx(model_prepared)\n # evaluate(model)\n```\n\nAnother approach is to compare statistics of the FP32 and INT8 layers; commonly used metrics for these are SQNR (Signal to Quantized Noise Ratio) and Mean-Square-Error. Such a comparative analysis may also help in guiding further optimizations.\n\nFig 8. Comparing model weights and activations\n\nPyTorch provides tools to help with this analysis under the Numeric Suite. Learn more about using Numeric Suite from the [full tutorial](https://pytorch.org/tutorials/prototype/numeric_suite_tutorial.html).\n\n```python\n# extract from https://pytorch.org/tutorials/prototype/numeric_suite_tutorial.html\nimport torch.quantization._numeric_suite as ns", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "def SQNR(x, y):\n # Higher is better\n Ps = torch.norm(x)\n Pn = torch.norm(x-y)\n return 20*torch.log10(Ps/Pn)\n\nwt_compare_dict = ns.compare_weights(fp32_model.state_dict(), int8_model.state_dict())\nfor key in wt_compare_dict:\n print(key, SQNR(wt_compare_dict[key]['float'], wt_compare_dict[key]['quantized'].dequantize()))\n\nact_compare_dict = ns.compare_model_outputs(fp32_model, int8_model, input_data)\nfor key in act_compare_dict:\n print(key, SQNR(act_compare_dict[key]['float'][0], act_compare_dict[key]['quantized'][0].dequantize()))\n```\n\n\n## Recommendations for your workflow\n\nFig 9. Suggested quantization workflow", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "### Points to note\n - Large (10M+ parameters) models are more robust to quantization error. [[2]]\n - Quantizing a model from a FP32 checkpoint provides better accuracy than training an INT8 model from scratch. [[2]]\n - Profiling the model runtime is optional but it can help identify layers that bottleneck inference.\n - Dynamic Quantization is an easy first step, especially if your model has many Linear or Recurrent layers.\n - Use symmetric-per-channel quantization with `MinMax` observers for quantizing weights. Use affine-per-tensor quantization with `MovingAverageMinMax` observers for quantizing activations [[2], [3]] (see the sketch after this list).\n - Use metrics like SQNR to identify which layers are most susceptible to quantization error. Turn off quantization on these layers.\n - Use QAT to fine-tune for around 10% of the original training schedule with an annealing learning rate schedule starting at 1% of the initial training learning rate. [[3]]\n - If the above workflow didn't work for you, we want to know more. Post a thread with details of your code (model architecture, accuracy metric, techniques tried). Feel free to cc me [@suraj.pt](https://discuss.pytorch.org/u/suraj.pt/).", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
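+{"page_content": "As a rough translation of the weight/activation recommendation above into code (a minimal sketch, not an official recipe; observer defaults may differ slightly across torch versions):\n\n```python\nimport torch\nfrom torch.quantization import QConfig\nfrom torch.quantization.observer import MovingAverageMinMaxObserver, PerChannelMinMaxObserver\n\n# Symmetric, per-channel observers for weights; affine, per-tensor observers for activations.\nrecommended_qconfig = QConfig(\n    activation=MovingAverageMinMaxObserver.with_args(\n        qscheme=torch.per_tensor_affine, dtype=torch.quint8\n    ),\n    weight=PerChannelMinMaxObserver.with_args(\n        qscheme=torch.per_channel_symmetric, dtype=torch.qint8\n    ),\n)\n```", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}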
+{"page_content": "That was a lot to digest, congratulations for sticking with it! Next, we'll take a look at quantizing a \"real-world\" model that uses dynamic control structures (if-else, loops). These elements prevent symbolic tracing of a model, which makes it a bit tricky to directly quantize the model out of the box. In the next post of this series, we'll get our hands dirty on a model that is chock full of loops and if-else blocks, and even uses third-party libraries in the `forward` call. \n\nWe'll also cover a cool new feature in PyTorch Quantization called Define-by-Run, which tries to ease this constraint by requiring only subsets of the model's computational graph to be free of dynamic flow. Check out the [Define-by-Run poster at PTDD'21](https://s3.amazonaws.com/assets.pytorch.org/ptdd2021/posters/C8.png) for a preview.", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "## References\n[[1]] Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M. W., & Keutzer, K. (2021). A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630.\n\n[[2]] Krishnamoorthi, R. (2018). Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342.\n\n[[3]] Wu, H., Judd, P., Zhang, X., Isaev, M., & Micikevicius, P. (2020). Integer quantization for deep learning inference: Principles and empirical evaluation. arXiv preprint arXiv:2004.09602.\n\n[[4]] PyTorch Quantization Docs\n\n\n[1]: https://arxiv.org/pdf/2103.13630.pdf\n[2]: https://arxiv.org/pdf/1806.08342.pdf\n[3]: https://arxiv.org/abs/2004.09602\n[4]: https://pytorch.org/docs/stable/quantization.html#prototype-fx-graph-mode-quantization", "metadata": {"source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'Towards Reproducible Research with PyTorch Hub'\nauthor: Team PyTorch\nredirect_from: /2019/06/10/pytorch_hub.html\n---\n\nReproducibility is an essential requirement for many fields of research including those based on machine learning techniques. However, many machine learning publications are either not reproducible or are difficult to reproduce. With the continued growth in the number of research publications, including tens of thousands of papers now hosted on arXiv and submissions to conferences at an all time high, research reproducibility is more important than ever. While many of these publications are accompanied by code as well as trained models, which is helpful, this still leaves a number of steps for users to figure out for themselves.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "We are excited to announce the availability of PyTorch Hub, a simple API and workflow that provides the basic building blocks for improving machine learning research reproducibility. PyTorch Hub consists of a pre-trained model repository designed specifically to facilitate research reproducibility and enable new research. It also has built-in support for [Colab](https://colab.research.google.com/), integration with [*Papers With Code*](https://paperswithcode.com/) and currently contains a broad set of models that include Classification and Segmentation, Generative, Transformers, etc.\n\n## [Owner] Publishing models", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "PyTorch Hub supports the publication of pre-trained models (model definitions and pre-trained weights) to a GitHub repository by adding a simple ```hubconf.py``` file.\nThis provides an enumeration of which models are to be supported and a list of dependencies needed to run the models.\nExamples can be found in the [torchvision](https://github.com/pytorch/vision/blob/master/hubconf.py), [huggingface-bert](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/hubconf.py) and [gan-model-zoo](https://github.com/facebookresearch/pytorch_GAN_zoo) repositories.\n\nLet us look at the simplest case: `torchvision`'s `hubconf.py`:\n\n```python\n# Optional list of dependencies required by the package\ndependencies = ['torch']", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "from torchvision.models.alexnet import alexnet\nfrom torchvision.models.densenet import densenet121, densenet169, densenet201, densenet161\nfrom torchvision.models.inception import inception_v3\nfrom torchvision.models.resnet import resnet18, resnet34, resnet50, resnet101, resnet152,\\\nresnext50_32x4d, resnext101_32x8d\nfrom torchvision.models.squeezenet import squeezenet1_0, squeezenet1_1\nfrom torchvision.models.vgg import vgg11, vgg13, vgg16, vgg19, vgg11_bn, vgg13_bn, vgg16_bn, vgg19_bn\nfrom torchvision.models.segmentation import fcn_resnet101, deeplabv3_resnet101\nfrom torchvision.models.googlenet import googlenet\nfrom torchvision.models.shufflenetv2 import shufflenet_v2_x0_5, shufflenet_v2_x1_0\nfrom torchvision.models.mobilenet import mobilenet_v2\n```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "In `torchvision`, the models have the following properties:\n- Each model file can function and be executed independently\n- They don't require any package other than PyTorch (encoded in `hubconf.py` as `dependencies['torch']`)\n- They don't need separate entry-points, because the models, when created, work seamlessly out of the box\n\nMinimizing package dependencies reduces the friction for users to load your model for immediate experimentation.\n\nA more involved example is HuggingFace's BERT models. Here is their `hubconf.py`:\n\n```python\ndependencies = ['torch', 'tqdm', 'boto3', 'requests', 'regex']\n\nfrom hubconfs.bert_hubconf import (\n bertTokenizer,\n bertModel,\n bertForNextSentencePrediction,\n bertForPreTraining,\n bertForMaskedLM,\n bertForSequenceClassification,\n bertForMultipleChoice,\n bertForQuestionAnswering,\n bertForTokenClassification\n)\n```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "Each model then requires an entrypoint to be created. Here is a code snippet to specify an entrypoint of the ```bertForMaskedLM``` model, which returns the pre-trained model weights.\n\n```python\ndef bertForMaskedLM(*args, **kwargs):\n \"\"\"\n BertForMaskedLM includes the BertModel Transformer followed by the\n pre-trained masked language modeling head.\n Example:\n ...\n \"\"\"\n model = BertForMaskedLM.from_pretrained(*args, **kwargs)\n return model\n```\n\nThese entry-points can serve as wrappers around complex model factories. They can give a clean and consistent help docstring, have logic to support downloading of pretrained weights (for example via `pretrained=True`) or have additional hub-specific functionality such as visualization.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "With a `hubconf.py` in place, you can send a pull request based on the template [here](https://github.com/pytorch/hub/blob/master/docs/template.md).\nOur goal is to curate high-quality, easily-reproducible, maximally-beneficial models for research reproducibility.\nHence, we may work with you to refine your pull request, and in some cases we may reject low-quality models from being published.\nOnce we accept your pull request, your model will soon appear on the [PyTorch Hub webpage](https://pytorch.org/hub) for all users to explore.\n\n\n## [User] Workflow\n\nAs a user, PyTorch Hub allows you to follow a few simple steps and do things like: 1) explore available models; 2) load a model; and 3) understand what methods are available for any given model. Let's walk through some examples of each.\n\n### Explore available entrypoints.\n\nUsers can list all available entrypoints in a repo using the ```torch.hub.list()``` API.", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "```python\n>>> torch.hub.list('pytorch/vision')\n>>>\n['alexnet',\n'deeplabv3_resnet101',\n'densenet121',\n...\n'vgg16',\n'vgg16_bn',\n'vgg19',\n'vgg19_bn']\n```\n\nNote that PyTorch Hub also allows auxiliary entrypoints (other than pretrained models), e.g. ```bertTokenizer``` for preprocessing in the [BERT](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_bert/) models, to make the user workflow smoother.\n\n\n### Load a model\n\nNow that we know which models are available in the Hub, users can load a model entrypoint using the ```torch.hub.load()``` API. This only requires a single command without the need to install a wheel. In addition, the ```torch.hub.help()``` API can provide useful information about how to instantiate the model.\n\n```python\nprint(torch.hub.help('pytorch/vision', 'deeplabv3_resnet101'))\nmodel = torch.hub.load('pytorch/vision', 'deeplabv3_resnet101', pretrained=True)\n```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "It is also common that repo owners will want to continually add bug fixes or performance improvements. PyTorch Hub makes it super simple for users to get the latest update by calling:\n\n```python\nmodel = torch.hub.load(..., force_reload=True)\n```\n\nWe believe this will help to alleviate the burden of repetitive package releases by repo owners and instead allow them to focus more on their research.\nIt also ensures that, as a user, you are getting the freshest available models.\n\nAt the same time, stability is important for users. Hence, some model owners serve their models from a specified branch or tag, rather than the `master` branch, to ensure stability of the code.\nFor example, `pytorch_GAN_zoo` serves its models from the `hub` branch:\n\n```python\nmodel = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'DCGAN', pretrained=True, useGPU=False)\n```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "Note that the ```*args```, ```**kwargs``` passed to `hub.load()` are used to *instantiate* a model. In the above example, `pretrained=True` and `useGPU=False` are given to the model's entrypoint.\n\n\n### Explore a loaded model\n\nOnce you have a model from PyTorch Hub loaded, you can use the following workflow to find out the available methods that are supported as well as understand better what arguments are required to run it.\n\nUse ```dir(model)``` to see all available methods of the model. Let's take a look at `bertForMaskedLM`'s available methods.\n\n```python\n>>> dir(model)\n>>>\n['forward',\n...\n'to',\n'state_dict',\n]\n```\n\n```help(model.forward)``` provides a view into what arguments are required to make your loaded model run.\n\n```python\n>>> help(model.forward)\n>>>\nHelp on method forward in module pytorch_pretrained_bert.modeling:\nforward(input_ids, token_type_ids=None, attention_mask=None, masked_lm_labels=None)\n...\n```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "Have a closer look at the [BERT](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_bert/) and [DeepLabV3](https://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/) pages, where you can see how these models can be used once loaded.\n\n### Other ways to explore\n\nModels available in PyTorch Hub also support [Colab](https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb) and are directly linked on [Papers With Code](https://paperswithcode.com/), so you can get started with a single click. [Here](https://paperswithcode.com/paper/densely-connected-convolutional-networks) is a good example to get started with (shown below).\n\n## Additional resources:", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "* PyTorch Hub API documentation can be found [here](https://pytorch.org/docs/stable/hub.html).\n* Submit a model [here](https://github.com/pytorch/hub) for publication in PyTorch Hub.\n* Go to [https://pytorch.org/hub](https://pytorch.org/hub) to learn more about the available models.\n* Look for more models to come on [paperswithcode.com](https://paperswithcode.com/).\n\n\nA BIG thanks to the folks at HuggingFace, the PapersWithCode team, fast.ai and Nvidia as well as Morgane Riviere (FAIR Paris) and lots of others for helping bootstrap this effort!!\n\nCheers!\n\nTeam PyTorch\n\n\n\n\n## FAQ:\n\n**Q: If we would like to contribute a model that is already in the Hub but perhaps mine has better accuracy, should I still contribute?**\n\n\nA: Yes!! A next step for Hub is to implement an upvote/downvote system to surface the best models.\n\n**Q: Who hosts the model weights for PyTorch Hub?**", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "A: You, as the contributor, are responsible for hosting the model weights. You can host your model in your favorite cloud storage or, if it fits within the limits, on GitHub. If it is not within your means to host the weights, check with us by opening an issue on the hub repository.\n\n**Q: What if my model is trained on private data? Should I still contribute this model?**\n\n\nA: No! PyTorch Hub is centered around open research and that extends to the usage of open datasets to train these models on. If a pull request for a proprietary model is submitted, we will kindly ask that you resubmit a model trained on something open and available.\n\n**Q: Where are my downloaded models saved?**\n\n\nA: We follow the [XDG Base Directory Specification](https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html) and adhere to common standards around cached files and directories.\n\nThe locations are used in the order of:", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
+{"page_content": "* Calling ```hub.set_dir()```\n* ```$TORCH_HOME/hub```, if environment variable ```TORCH_HOME``` is set.\n* ```$XDG_CACHE_HOME/torch/hub```, if environment variable ```XDG_CACHE_HOME``` is set.\n* ```~/.cache/torch/hub```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}
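+{"page_content": "For instance, a short sketch (not from the original post) of overriding the cache location before loading a model; the path shown is hypothetical:\n\n```python\nimport torch\n\n# Point the Hub cache at a custom directory instead of ~/.cache/torch/hub\ntorch.hub.set_dir('/data/torch_hub_cache')  # hypothetical path\n\n# Repos and weights downloaded by subsequent loads land in that directory\nmodel = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)\n```", "metadata": {"source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}}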
+{"page_content": "---\nlayout: blog_detail\ntitle: 'The road to 1.0: production ready PyTorch'\nauthor: The PyTorch Team\nredirect_from: /2018/05/02/road-to-1.0.html\n---\n\nWe would like to give you a preview of the roadmap for PyTorch 1.0, the next release of PyTorch. Over the last year, we've had 0.2, 0.3 and 0.4 transform PyTorch from a [Torch+Chainer]-like interface into something cleaner, adding double-backwards, numpy-like functions, advanced indexing and removing Variable boilerplate. At this time, we're confident that the API is in a reasonable and stable state and that we can release a 1.0.\n\nHowever, 1.0 isn't just about stability of the interface.\n\nOne of PyTorch's biggest strengths is its first-class Python integration, imperative style, simplicity of the API and options. These are aspects that make PyTorch good for research and hackability.\n\nOne of its biggest downsides has been production-support. What we mean by production-support is the countless things one has to do to models to run them efficiently at massive scale:", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "- exporting to C++-only runtimes for use in larger projects\n- optimizing mobile systems on iPhone, Android, Qualcomm and other systems\n- using more efficient data layouts and performing kernel fusion to do faster inference (saving 10% of speed or memory at scale is a big win)\n- quantized inference (such as 8-bit inference)\n\nStartups, large companies and anyone who wants to build a product around PyTorch have asked for production support. At Facebook (the largest stakeholder for PyTorch) we have Caffe2, which has been the production-ready platform, running in our datacenters and shipping to more than 1 billion phones spanning eight generations of iPhones and six generations of Android CPU architectures. It has server-optimized inference on Intel / ARM, TensorRT support, and all the necessary bits for production. Considering all this value locked-in to a platform that the PyTorch team works quite closely with, **we decided to marry PyTorch and Caffe2 which gives the production-level readiness for PyTorch**.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "Supporting production features without adding usability issues for our researchers and end-users needs creative solutions.\n\n## Production != Pain for researchers\n\nAdding production capabilities involves increasing the API complexity and number of configurable options for models. One configures memory-layouts (NCHW vs NHWC vs N,C/32,H,W,32, each providing different performance characteristics), quantization (8-bit? 3-bit?), fusion of low-level kernels (you used a Conv + BatchNorm + ReLU, let's fuse them into a single kernel), separate backend options (MKLDNN backend for a few layers and NNPACK backend for other layers), etc.\n\nPyTorch's central goal is to provide a great platform for research and hackability. So, while we add all these optimizations, we've been working with a hard design constraint to never trade these off against usability.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "To pull this off, we are introducing `torch.jit`, a just-in-time (JIT) compiler that at runtime takes your PyTorch models and rewrites them to run at production-efficiency. The JIT compiler can also export your model to run in a C++-only runtime based on Caffe2 bits.\n\n> In 1.0, your code continues to work as-is; we're not making any big changes to the existing API.\n\nMaking your model production-ready is an opt-in annotation, which uses the `torch.jit` compiler to export your model to a Python-less environment and improve its performance. Let's walk through the JIT compiler in detail.\n\n## `torch.jit`: A JIT-compiler for your models", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "We strongly believe that it's hard to match the productivity you get from specifying your models directly as idiomatic Python code. This is what makes PyTorch so flexible, but it also means that PyTorch pretty much never knows the operation you'll run next. This however is a big blocker for export/productionization and heavyweight automatic performance optimizations, because they need full upfront knowledge of how the computation will look before it even gets executed.\n\nWe provide two opt-in ways of recovering this information from your code: one based on tracing native python code, and one based on compiling an annotated subset of the python language into a python-free intermediate representation. After thorough discussions we concluded that they're both going to be useful in different contexts, and as such you will be able to mix and match them freely.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "## Tracing Mode\n\nThe PyTorch tracer, `torch.jit.trace`, is a function that records all the native PyTorch operations performed in a code region, along with the data dependencies between them. In fact, PyTorch has had a tracer since 0.3, which has been used for exporting models through ONNX. What changes now, is that you no longer necessarily need to take the trace and run it elsewhere - PyTorch can re-execute it for you, using a carefully designed high-performance C++ runtime. As we develop PyTorch 1.0 this runtime will integrate all the optimizations and hardware integrations that Caffe2 provides.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "The biggest benefit of this approach is that it doesn't really care how your Python code is structured \u2014 you can trace through generators or coroutines, modules or pure functions. Since we only record native PyTorch operators, these details have no effect on the trace recorded. This behavior, however, is a double-edged sword. For example, if you have a loop in your model, it will get unrolled in the trace, inserting a copy of the loop body for as many times as the loop ran. This opens up opportunities for zero-cost abstraction (e.g. you can loop over modules, and the actual trace will be loop-overhead free!), but on the other hand this will also affect data dependent loops (think of e.g. processing sequences of varying lengths), effectively hard-coding a single length into the trace.\n\nFor networks that do not contain loops and if statements, tracing is non-invasive and is robust enough to handle a wide variety of coding styles. This code example illustrates what tracing looks like:", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "```python\n# This will run your nn.Module or regular Python function with the example\n# input that you provided. The returned callable can be used to re-execute\n# all operations that happened during the example run, but it will no longer\n# use the Python interpreter.\nfrom torch.jit import trace\ntraced_model = trace(model, example_input=input)\ntraced_fn = trace(fn, example_input=input)\n\n# The training loop doesn't change. Traced model behaves exactly like an\n# nn.Module, except that you can't edit what it does or change its attributes.\n# Think of it as a \"frozen module\".\nfor input, target in data_loader:\n loss = loss_fn(traced_model(input), target)\n```\n\n## Script Mode\n\nTracing mode is a great way to minimize the impact on your code, but we're also very excited about the models that fundamentally make use of control flow such as RNNs. Our solution to this is a scripting mode.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "In this case you write out a regular Python function, except that you can no longer use certain more complicated language features. Once you've isolated the desired functionality, you let us know that you'd like the function to get compiled by decorating it with an `@script` decorator. This annotation will transform your python function directly into our high-performance C++ runtime. This lets us recover all the PyTorch operations along with loops and conditionals. They will be embedded into our internal representation of this function, and will be accounted for every time this function is run.\n\n```python\nfrom torch.jit import script\n\n@script\ndef rnn_loop(x):\n  hidden = None\n  for x_t in x.split(1):\n    x, hidden = model(x, hidden)\n  return x\n```\n\n## Optimization and Export\n\nRegardless of whether you use tracing or `@script`, the result is a python-free representation of your model, which can be used to optimize the model or to export the model from python for use in production environments.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "Extracting bigger segments of the model into an intermediate representation makes it possible to do sophisticated whole-program optimizations and to offload computation to specialized AI accelerators which operate on graphs of computation. We have already been developing the beginnings of these optimizations, including passes that fuse GPU operations together to improve the performance of smaller RNN models.\n\nIt also allows us to use existing high-performance backends available in Caffe2 today to run the model efficiently. Additionally, @script functions (and modules!) can be fully exported to ONNX in a way that retains their dynamic nature, such that you can easily run them in a Python-free environment using the model executors from Caffe2 or by transferring the model to any other framework supporting ONNX.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
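+{"page_content": "As a rough sketch of what this export path looks like with the torch.jit and torch.onnx entry points in released PyTorch (the names below are illustrative, not part of the roadmap above):\n\n```python\nimport torch\nimport torch.nn as nn\n\nmodel = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))\nexample = torch.rand(1, 4)\n\n# Compile to a python-free representation (tracing would work here too)\nscripted = torch.jit.script(model)\nscripted.save(\"model.pt\")  # TorchScript archive, loadable from C++ via torch::jit::load\n\n# Export the same model to ONNX for other runtimes\ntorch.onnx.export(model, example, \"model.onnx\")\n```", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}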
+{"page_content": "## Usability\n\n**We care deeply about maintaining our current level of usability and we know that execution of the code not directly in Python leads to harder debugging, but this is something that we think about a lot, and we're making sure that you're not getting locked in to a completely different programming language.**", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "First, we follow the principle of pay for what you use \u2014 if you don't need to optimize or export your model, you do not have to use these new features and won't see any downsides. Furthermore, use of traced or @script modules/functions can be done incrementally. For instance, all of these behaviors are allowed: You can trace part of your model and use the trace in a larger non-traced model. You can use tracing for 90% of your model, and use @script for the one sub-module that actually has some control flow in it. You can write a function using @script and have it call a native python function. If something appears incorrect in an @script function, you can remove the annotation and the code will execute in native python where it is easy to debug using your favorite tools and methods. Think of tracing and @script like type annotations using MyPy or TypeScript \u2014 each additional annotation can be tested incrementally, and none are required until you want to optimize or productionize.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
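+{"page_content": "A small sketch of this mix-and-match usage with the released torch.jit API (which differs cosmetically from the preview annotations described above): a traced function handles the straight-line math, while a scripted function keeps the data-dependent loop as real control flow.\n\n```python\nimport torch\n\ndef cell(x, w):\n    # plain PyTorch ops: a good candidate for tracing\n    return torch.tanh(x @ w)\n\ntraced_cell = torch.jit.trace(cell, (torch.rand(1, 4), torch.rand(4, 4)))\n\n@torch.jit.script\ndef loop_body(x, w, steps: int):\n    # script mode preserves the loop instead of unrolling it\n    for _ in range(steps):\n        x = traced_cell(x, w)\n    return x\n\nprint(loop_body(torch.rand(1, 4), torch.rand(4, 4), 3))\n```", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}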
+{"page_content": "Most importantly, these modes will be built into the core of PyTorch so that mixing and matching them with your existing code can happen seamlessly.\n\n_Note: The name JIT for these components is a bit of a misnomer and comes from historical reasons. The tracing/function execution in PyTorch started out as an optimizing JIT compiler that generated fused CUDA kernels but then grew to encompass optimization, @script, and export. When it is ready for release we will likely rename this functionality to the hybrid frontend, but we wanted to present it here as it is named in the code so that you can follow along as we develop it._\n\n## Other changes and improvements\n\nProduction support is the big feature for 1.0, but we will continue optimizing and fixing other parts of PyTorch as part of the standard release process.", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "On the backend side of things, PyTorch will see some changes, which might affect user-written C and C++ extensions. We are replacing (or refactoring) the backend ATen library to incorporate features and optimizations from Caffe2.\n\n## Last Words\n\nWe aim to release 1.0 some time during the summer. You can follow-along our progress on the [Pull Requests](https://github.com/pytorch/pytorch/pulls) page.\n\nYou can read this from the perspective of the Caffe2 project at: [https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html](https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html)", "metadata": {"source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch adds new tools and libraries, welcomes Preferred Networks to its community'\nauthor: Team PyTorch\n---\n\nPyTorch continues to be used for the latest state-of-the-art research on display at the NeurIPS conference next week, making up nearly [70% of papers](https://chillee.github.io/pytorch-vs-tensorflow/) that cite a framework. In addition, we\u2019re excited to welcome Preferred Networks, the maintainers of the Chainer framework, to the PyTorch community. Their teams are moving fully over to PyTorch for developing their ML capabilities and services.\n\nThis growth underpins PyTorch\u2019s focus on building for the needs of the research community, and increasingly, supporting the full workflow from research to production deployment. To further support researchers and developers, we\u2019re launching a number of new tools and libraries for large scale computer vision and elastic fault tolerant training. Learn more on GitHub and at our NeurIPS booth.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
+{"page_content": "## Preferred Networks joins the PyTorch community\n\nPreferred Networks, Inc. (PFN) announced plans to move its deep learning framework from Chainer to PyTorch. As part of this change, PFN will collaborate with the PyTorch community and contributors, including people from Facebook, Microsoft, CMU, and NYU, to participate in the development of PyTorch.\n\nPFN developed Chainer, a deep learning framework that introduced the concept of define-by-run (also referred to as eager execution), to support and speed up its deep learning development. Chainer has been used at PFN since 2015 to rapidly solve real-world problems with the latest, cutting-edge technology. Chainer was also one of the inspirations for PyTorch\u2019s initial design, as outlined in the [PyTorch NeurIPS](https://papers.nips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library) paper.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
+{"page_content": "PFN has driven innovative work with [CuPy](https://cupy.chainer.org/), ImageNet in 15 minutes, [Optuna](https://optuna.org/), and other projects that have pushed the boundaries of design and engineering. As part of the PyTorch community, PFN brings with them creative engineering capabilities and experience to help take the framework forward. In addition, PFN\u2019s migration to PyTorch will allow it to efficiently incorporate the latest research results to accelerate its R&D activities, [given PyTorch\u2019s broad adoption with researchers](https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/), and to collaborate with the community to add support for PyTorch on MN-Core, a deep learning processor currently in development.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
+{"page_content": "We are excited to welcome PFN to the PyTorch community, and to jointly work towards the common goal of furthering advances in deep learning technology. Learn more about the PFN\u2019s migration to PyTorch [here](https://preferred.jp/en/news/pr20191205/).\n\n## Tools for elastic training and large scale computer vision\n\n### PyTorch Elastic (Experimental)\n\nLarge scale model training is becoming commonplace with architectures like BERT and the growth of model parameter counts into the billions or even tens of billions. To achieve convergence at this scale in a reasonable amount of time, the use of distributed training is needed.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
+{"page_content": "The current PyTorch Distributed Data Parallel (DDP) module enables data parallel training where each process trains the same model but on different shards of data. It enables bulk synchronous, multi-host, multi-GPU/CPU execution of ML training. However, DDP has several shortcomings: jobs cannot start without acquiring all the requested nodes; jobs cannot continue after a node fails due to an error or transient issue; jobs cannot incorporate a node that joined later; and lastly, progress cannot be made in the presence of a slow or stuck node.\n\nThe focus of [PyTorch Elastic](https://github.com/pytorch/elastic), which uses Elastic Distributed Data Parallelism, is to address these issues and build a generic framework/APIs for PyTorch to enable reliable and elastic execution of these data parallel training workloads. It will provide better programmability, higher resilience to failures of all kinds, higher efficiency and larger-scale training compared with pure DDP.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
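+{"page_content": "For reference, a minimal sketch (not from the original post) of the data-parallel pattern DDP provides: every process holds a replica of the model, trains on its own shard of the data, and gradients are averaged across processes on each backward pass. It assumes one process per worker and the usual RANK/WORLD_SIZE/MASTER_ADDR/MASTER_PORT environment variables set by a launcher.\n\n```python\nimport torch\nimport torch.distributed as dist\nfrom torch.nn.parallel import DistributedDataParallel as DDP\n\ndef main():\n    dist.init_process_group(backend=\"gloo\")  # use \"nccl\" on GPU hosts\n\n    model = torch.nn.Linear(10, 10)\n    ddp_model = DDP(model)  # wraps the replica; gradients are all-reduced across ranks\n\n    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)\n    for _ in range(3):\n        x = torch.rand(8, 10)  # in practice, each rank reads its own shard of the data\n        loss = ddp_model(x).sum()\n        opt.zero_grad()\n        loss.backward()  # synchronizes gradients across processes\n        opt.step()\n\n    dist.destroy_process_group()\n\nif __name__ == \"__main__\":\n    main()\n```", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}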
+{"page_content": "Elasticity, in this case, means both: 1) the ability for a job to continue after node failure (by running with fewer nodes and/or by incorporating a new host and transferring state to it); and 2) the ability to add/remove nodes dynamically due to resource availability changes or bottlenecks.\n\nWhile this feature is still experimental, you can try it out on AWS EC2, with the instructions [here](https://github.com/pytorch/elastic/tree/master/aws). Additionally, the PyTorch distributed team is working closely with teams across AWS to support PyTorch Elastic training within services such as Amazon Sagemaker and Elastic Kubernetes Service (EKS). Look for additional updates in the near future.\n\n### New Classification Framework", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
+{"page_content": "Image and video classification are at the core of content understanding. To that end, you can now leverage a new end-to-end framework for large-scale training of state-of-the-art image and video classification models. It allows researchers to quickly prototype and iterate on large distributed training jobs at the scale of billions of images. Advantages include:\n\n* Ease of use - This framework features a modular, flexible design that allows anyone to train machine learning models on top of PyTorch using very simple abstractions. The system also has out-of-the-box integration with AWS on PyTorch Elastic, facilitating research at scale and making it simple to move between research and production.\n* High performance - Researchers can use the framework to train models such as Resnet50 on ImageNet in as little as 15 minutes.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
+{"page_content": "You can learn more at the [NeurIPS Expo workshop](https://nips.cc/ExpoConferences/2019/schedule?workshop_id=16) on Multi-Modal research to production or get started with the PyTorch Elastic Imagenet example [here](https://github.com/pytorch/elastic/blob/master/examples/imagenet/main.py).\n\n## Come see us at NeurIPS\n\nThe PyTorch team will be hosting workshops at NeurIPS during the industry expo on 12/8. Join the sessions below to learn more, and visit the team at the PyTorch booth on the show floor and during the Poster Session. At the booth, we\u2019ll be walking through an interactive demo of PyTorch running fast neural style transfer on a Cloud TPU - here\u2019s a [sneak peek](https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/style_transfer_inference-xrt-1-15.ipynb).", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
+{"page_content": "We\u2019re also publishing a [paper that details the principles that drove the implementation of PyTorch](https://papers.nips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library) and how they\u2019re reflected in its architecture.\n\n*[Multi-modal Research to Production](https://nips.cc/ExpoConferences/2019/schedule?workshop_id=16)* - This workshop will dive into a number of modalities such as computer vision (large-scale image classification and instance segmentation) and translation and speech (seq-to-seq Transformers) through the lens of taking cutting-edge research to production. Lastly, we will also walk through how to use the latest APIs in PyTorch to take models developed in eager mode into graph mode via TorchScript and quantize them for production deployment at scale on servers or mobile devices. Libraries used include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
+{"page_content": "* Classification Framework - a newly open-sourced PyTorch framework developed by Facebook AI for research on large-scale image and video classification. It allows researchers to quickly prototype and iterate on large distributed training jobs. Models built on the framework can be seamlessly deployed to production.\n* Detectron2 - the recently released object detection library built by the Facebook AI Research computer vision team. We will articulate the improvements over the previous version, including: 1) support for the latest models and new tasks; 2) increased flexibility, to enable new computer vision research; 3) maintainability and scalability, to support production use cases.\n* Fairseq - a general-purpose sequence-to-sequence library that can be used in many applications, including (unsupervised) translation, summarization, dialog, and speech recognition.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
{"page_content": "*[Responsible and Reproducible AI](https://nips.cc/ExpoConferences/2019/schedule?workshop_id=14)* - This workshop on Responsible and Reproducible AI will dive into important areas that are shaping the future of how we interpret, reproduce research, and build AI with privacy in mind. We will cover major challenges, walk through solutions, and finish each talk with a hands-on tutorial.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "* Reproducibility: As the number of research papers submitted to arXiv and conferences skyrockets, scaling reproducibility becomes difficult. We must address the following challenges: aid extensibility by standardizing code bases, democratize paper implementation by writing hardware agnostic code, facilitate results validation by documenting \u201ctricks\u201d authors use to make their complex systems function. To offer solutions, we will dive into tool like PyTorch Hub and PyTorch Lightning which are used by some of", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "the top researchers in the world to reproduce the state of the art.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "* Interpretability: With the increase in model complexity and the resulting lack of transparency, model interpretability methods have become increasingly important. Model understanding is both an active area of research as well as an area of focus for practical applications across industries using machine learning. To get hands on, we will use the recently released Captum library that provides state-of-the-art algorithms to provide researchers and developers with an easy way to understand the importance of", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "neurons/layers and the predictions made by our models.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "* Private AI: Practical applications of ML via cloud-based or machine-learning-as-a-service platforms pose a range of security and privacy challenges. There are a number of technical approaches being studied including: homomorphic encryption, secure multi-party computation, trusted execution environments, on-device computation, and differential privacy. To provide an immersive understanding of how some of these technologies are applied, we will use the CrypTen project which provides a community based", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "research platform to take the field of Private AI forward.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
+{"page_content": "* Reproducibility: As the number of research papers submitted to arXiv and conferences skyrockets, scaling reproducibility becomes difficult. We must address the following challenges: aid extensibility by standardizing code bases, democratize paper implementation by writing hardware-agnostic code, facilitate results validation by documenting \u201ctricks\u201d authors use to make their complex systems function. To offer solutions, we will dive into tools like PyTorch Hub and PyTorch Lightning, which are used by some of the top researchers in the world to reproduce the state of the art.\n* Interpretability: With the increase in model complexity and the resulting lack of transparency, model interpretability methods have become increasingly important. Model understanding is both an active area of research as well as an area of focus for practical applications across industries using machine learning. To get hands-on, we will use the recently released Captum library, which provides researchers and developers with state-of-the-art algorithms and an easy way to understand the importance of neurons/layers and the predictions made by our models.\n* Private AI: Practical applications of ML via cloud-based or machine-learning-as-a-service platforms pose a range of security and privacy challenges. There are a number of technical approaches being studied, including: homomorphic encryption, secure multi-party computation, trusted execution environments, on-device computation, and differential privacy. To provide an immersive understanding of how some of these technologies are applied, we will use the CrypTen project, which provides a community-based research platform to take the field of Private AI forward.", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
{"page_content": "*We\u2019d like to thank the entire PyTorch team and the community for all their contributions to this work.*\n\nCheers!\n\nTeam PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: \"How Computational Graphs are Executed in PyTorch\"\nauthor: Preferred Networks\nfeatured-img: \"\"\n---\n\nWelcome to the last entry into understanding the autograd engine of PyTorch series!\nIf you haven\u2019t read parts [1](https://pytorch.org/blog/overview-of-pytorch-autograd-engine/) & [2](https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/) check them now to understand how PyTorch creates the computational graph for the backward pass!", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "This post is based on PyTorch v1.11, so some highlighted parts may differ across versions.\n\n# PyTorch autograd graph execution\n\nThe last post showed how PyTorch constructs the graph to calculate the outputs' derivatives w.r.t. the inputs when executing the forward pass. Now we will see how the execution of the backward pass is coordinated and done by looking at the whole process, starting from Python down to the lower C++ level internals.\n\n# What Happens when Calling `backward()`/`grad()` from Python", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Using `variable.backward()`\n\nAfter doing all our calculations with an input set to require the gradient, we call `.backward()` on the result to initiate the backward pass execution.\n\n```python\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> y = torch.exp(x).sum()\n>>> y.backward()", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Calling [`.backward()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/_tensor.py#L307-L363) on a tensor results in a call to [`torch.autograd.backward()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/autograd/__init__.py#L85-L175).\n```python\n# torch/_tensor.py", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "def backward(self, gradient=None, retain_graph=None, create_graph=False, inputs=None):\n \u2026\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\n\n```\n`torch.autograd.backward()` checks the arguments and calls the autograd engine in the C++ layer.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "``` python\ndef backward(\n tensors: _TensorOrTensors,\n grad_tensors: Optional[_TensorOrTensors] = None,\n retain_graph: Optional[bool] = None,\n create_graph: bool = False,\n grad_variables: Optional[_TensorOrTensors] = None,\n inputs: Optional[_TensorOrTensors] = None,\n) -> None:\n \u2026\n\n if inputs is not None and len(inputs) == 0:\n raise RuntimeError(\"'inputs' argument to backward() cannot be empty.\")", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "tensors = (tensors,) if isinstance(tensors, torch.Tensor) else tuple(tensors)\n inputs = (inputs,) if isinstance(inputs, torch.Tensor) else \\\n tuple(inputs) if inputs is not None else tuple()\n\n grad_tensors_ = _tensor_or_tensors_to_tuple(grad_tensors, len(tensors))\n grad_tensors_ = _make_grads(tensors, grad_tensors_)\n if retain_graph is None:\n retain_graph = create_graph", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Variable._execution_engine.run_backward(\n tensors, grad_tensors_, retain_graph, create_graph, inputs,\n allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "First, whether the `grad_tensors` argument was specified or not, there is a call to the [`_make_grads`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/autograd/__init__.py#L30-L74) function. This is used to check the provided `grad_tensors` or to specify the default value for them by looking at the `tensors` argument values\u2019 shapes. Check the first blog post for details on the default value for the `grad_tensors` of the backward pass. This function just provides the", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "vector for the vector-Jacobian product if it was not initially specified.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In the above code, `Variable` has an `_execution_engine` attribute that is defined in [`torch.autograd.variable`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/autograd/variable.py#L14) to be of type `ImperativeEngine`; the C++ engine exported to python and declared in [`torch/csrc/autograd/python_engine.cpp`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/python_engine.cpp#L384). In the following sections, we", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "explain in detail how this object executes the backward pass.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Note that the `torch.autograd.backward` function has an `inputs` optional argument. This argument is used when we want to calculate the `.grad` field of only a subset of input tensors in the forward pass.\n\n```python\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> y = torch.tensor([0.1, 0.90], requires_grad=True)\n>>> z = torch.exp(x * y).sum()\n>>> torch.autograd.backward([z], inputs=[x])\n>>> x.grad\ntensor([0.1051, 1.7676])\n>>> y.grad # None\n>>>\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Using `torch.autograd.grad`\n\nAn alternative to `backward()` is to use [`torch.autograd.grad()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/autograd/__init__.py#L177-L277). The main difference to `backward()` is that `grad()` returns a tuple of tensors with the gradients of the `outputs` w.r.t. the `inputs` kwargs instead of storing them in the `.grad` field of the tensors. As you can see, the `grad()` code shown below is very similar to backward.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```python\ndef grad(\n outputs: _TensorOrTensors,\n inputs: _TensorOrTensors,\n grad_outputs: Optional[_TensorOrTensors] = None,\n retain_graph: Optional[bool] = None,\n create_graph: bool = False,\n only_inputs: bool = True,\n allow_unused: bool = False,\n is_grads_batched: bool = False\n) -> Tuple[torch.Tensor, ...]:\n \n outputs = (outputs,) if isinstance(outputs, torch.Tensor) else tuple(outputs)\n inputs = (inputs,) if isinstance(inputs, torch.Tensor) else tuple(inputs)", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "overridable_args = outputs + inputs\n if has_torch_function(overridable_args):\n return handle_torch_function(\n grad,\n overridable_args,\n outputs,\n inputs,\n grad_outputs=grad_outputs,\n retain_graph=retain_graph,\n create_graph=create_graph,\n only_inputs=only_inputs,\n allow_unused=allow_unused,\n )", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "grad_outputs_ = _tensor_or_tensors_to_tuple(grad_outputs, len(outputs))\n grad_outputs_ = _make_grads(outputs, grad_outputs_)\n\n if retain_graph is None:\n retain_graph = create_graph\n\n if is_grads_batched:\n # \u2026. It will not be covered here\n else:\n return Variable._execution_engine.run_backward(\n outputs, grad_outputs_, retain_graph, create_graph, inputs,\n allow_unused, accumulate_grad=False) # Calls into the C++ engine to run the backward pass", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
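-{"page_content": "As a quick illustration of the difference (a small example added here for clarity, not part of the original walkthrough), `grad()` hands the gradients back to the caller and leaves the `.grad` fields untouched:\n\n```python\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> y = torch.exp(x).sum()\n>>> torch.autograd.grad(y, x)\n(tensor([1.6487, 2.1170]),)\n>>> x.grad is None  # grad() returns the result instead of populating .grad\nTrue\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}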
-{"page_content": "Figure 1 shows the computational graph with the `backward()` and `grad()` arguments highlighted in red and blue, respectively:\n\nFigure 1: Correspondence of `backward`/`grad` arguments in the graphs.\n\n# Going Inside the Autograd Engine", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Refreshing Concepts: Nodes and Edges\n\nAs we saw in [2](https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/), the computational graph comprises `Node` and `Edge` objects. Please read that post if you haven\u2019t done so yet.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Nodes", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "`Node` objects are defined in [`torch/csrc/autograd/function.h`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L105-L176), and they provide an overload of `operator()` for the associated function and a list of edges to do the graph traversal. Note that `Node` is a base class that autograd functions inherit from and override the `apply` method to execute the backward function.\n```c++\nstruct TORCH_API Node : std::enable_shared_from_this {", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "...\n /// Evaluates the function on the given inputs and returns the result of the\n /// function call.\n variable_list operator()(variable_list&& inputs) {\n ...\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "protected:\n /// Performs the `Node`'s actual operation.\n virtual variable_list apply(variable_list&& inputs) = 0;\n \u2026\n edge_list next_edges_;\n uint64_t topological_nr_ = 0;\n \u2026", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "There is an attribute called [`topological_nr_`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L481) in every node object. This number is used to optimize the graph execution as it allows discarding graph branches under certain conditions. The topological number is the longest distance between this node and any leaf node, and it is shown in Figure 2. Its main property is that for any pair of nodes `x`, `y` in a directed graph `topo_nr(x) <", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "topo_nr(y)` means that there is no path from `x` to `y`. So this allows for reducing the number of paths in the graph in need of traversal. Check the [topological_nr](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L314-L343)", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "method comment for further details.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
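-{"page_content": "A rough Python rendering of that definition, walking the `grad_fn` graph from a node towards its leaves, may help build intuition. This is an illustrative sketch only, not the engine's implementation; the real counter lives on the C++ `Node` and is maintained incrementally.\n\n```python\nimport torch\n\ndef topo_nr(fn):\n    # Longest distance from this node to any leaf node\n    # (a leaf has no outgoing edges, e.g. an AccumulateGrad node).\n    children = [next_fn for next_fn, _ in fn.next_functions if next_fn is not None]\n    if not children:\n        return 0\n    return 1 + max(topo_nr(child) for child in children)\n\nx = torch.tensor([0.5, 0.75], requires_grad=True)\nz = (torch.exp(x) + x).sum()\nprint(topo_nr(z.grad_fn))  # 3: Sum -> Add -> Exp -> AccumulateGrad is the longest path\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}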
-{"page_content": "Figure 2: Example of the Topological Number calculation", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Edges\n\nThe [`Edge`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/edge.h#L14-L39) object links `Node`s together, and its implementation is straightforward.\n\n```c++\nstruct Edge {\n ...\n /// The function this `Edge` points to.\n std::shared_ptr<Node> function;\n /// The identifier of a particular input to the function.\n uint32_t input_nr;\n};", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "It only requires a function pointer to the `Node` and an input number that is the index of the output from the forward function this edge points to. When preparing the set of gradients before calling \"function\", we know that what is flowing from this edge should be accumulated in the \"input_nr\"th argument. Note that the input/output name is flipped here and this is the input to the backward function.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "`Edge` objects are constructed using the [`gradient_edge`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/variable.cpp#L221-L233) function.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n Edge gradient_edge(const Variable& self) {\n if (const auto& gradient = self.grad_fn()) {\n return Edge(gradient, self.output_nr());\n } else {\n return Edge(grad_accumulator(self), 0);\n }\n }\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
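-{"page_content": "These edges are also visible from Python: every `grad_fn` exposes a `next_functions` attribute whose entries are `(Node, input_nr)` pairs. A quick example (object addresses abbreviated; reprs may vary across versions):\n\n```python\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> y = torch.exp(x)\n>>> z = y.sum()\n>>> z.grad_fn\n<SumBackward0 object at 0x...>\n>>> z.grad_fn.next_functions       # edge to ExpBackward0, feeding its input 0\n((<ExpBackward0 object at 0x...>, 0),)\n>>> y.grad_fn.next_functions       # edge to the leaf's AccumulateGrad node\n((<AccumulateGrad object at 0x...>, 0),)\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}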
-{"page_content": "Entering the C++ Realm", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Once that `torch.autograd.backward()` has been invoked, the\n[`THPEngine_run_backward`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/python_engine.cpp#L152-L286) routine starts the graph traversal. Following is a schema of the function body:\n```c++\nPyObject *THPEngine_run_backward(PyObject *self, PyObject *args, PyObject *kwargs)\n{\n HANDLE_TH_ERRORS\n PyObject *tensors = nullptr;\n PyObject *grad_tensors = nullptr;\n unsigned char keep_graph = 0;", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "unsigned char create_graph = 0;\n PyObject *inputs = nullptr;\n \n // Convert the python arguments to C++ objects\n const char *accepted_kwargs[] = { // NOLINT\n \"tensors\", \"grad_tensors\", \"keep_graph\", \"create_graph\", \"inputs\",\n \"allow_unreachable\", \"accumulate_grad\", nullptr\n };\n if (!PyArg_ParseTupleAndKeywords(args, kwargs, \"OObb|Obb\", (char**)accepted_kwargs,\n &tensors, &grad_tensors, &keep_graph, &create_graph, &inputs, &allow_unreachable, &accumulate_grad))", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Prepare arguments\n for(const auto i : c10::irange(num_tensors)) {\n // Check that the tensors require gradients\n }\n\n std::vector output_edges;\n if (inputs != nullptr) {\n // Prepare outputs\n }\n\n {\n // Calls the actual autograd engine\n pybind11::gil_scoped_release no_gil;\n outputs = engine.execute(roots, grads, keep_graph, create_graph, accumulate_grad, output_edges);\n }\n // Clean up and finish\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "First, we prepare the input arguments after converting the `PyObject` arguments to actual C++ objects. The `tensors` list contains the tensors from which we start the backward pass. These tensors are converted to edges using `torch::autograd::impl::gradient_edge` and added to a list called `roots` where the graph traversal starts.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n edge_list roots;\n roots.reserve(num_tensors);\n variable_list grads;\n grads.reserve(num_tensors);\n for(const auto i : c10::irange(num_tensors)) {\n PyObject *_tensor = PyTuple_GET_ITEM(tensors, i);\n const auto& variable = THPVariable_Unpack(_tensor);\n auto gradient_edge = torch::autograd::impl::gradient_edge(variable);\n roots.push_back(std::move(gradient_edge));", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "PyObject *grad = PyTuple_GET_ITEM(grad_tensors, i);\n if (THPVariable_Check(grad)) {\n const Variable& grad_var = THPVariable_Unpack(grad);\n grads.push_back(grad_var);\n } \n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Now, if the `inputs` argument was specified in `backward` or we used the `torch.autograd.grad` api, the following code creates a list of edges to accumulate the gradients in the specified tensors at the end of the computation. The engine uses this later to optimize the execution as it doesn\u2019t add the gradients in all the leaf nodes, just the specified ones.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n std::vector output_edges;\n if (inputs != nullptr) {\n int num_inputs = PyTuple_GET_SIZE(inputs);\n output_edges.reserve(num_inputs);\n for (const auto i : c10::irange(num_inputs)) {\n PyObject *input = PyTuple_GET_ITEM(inputs, i);\n const auto& tensor = THPVariable_Unpack(input);\n const auto output_nr = tensor.output_nr();\n auto grad_fn = tensor.grad_fn();\n if (!grad_fn) {\n grad_fn = torch::autograd::impl::try_get_grad_accumulator(tensor);\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (accumulate_grad) {\n tensor.retain_grad();\n }\n if (!grad_fn) {\n output_edges.emplace_back(std::make_shared(), 0);\n } else {\n output_edges.emplace_back(grad_fn, output_nr);\n }\n }\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The next step is the actual graph traversal and node function execution, and finally, the cleanup and return.\n\n```c++\n {\n // Calls the actual autograd engine\n pybind11::gil_scoped_release no_gil;\n auto& engine = python::PythonEngine::get_python_engine();\n outputs = engine.execute(roots, grads, keep_graph, create_graph, accumulate_grad, output_edges);\n }\n // Clean up and finish\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "# Starting the Real Execution\n\n`engine.execute` is present in [torch/csrc/autograd/engine.cpp](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L969-L1044).\n\nThere are two differentiated steps here:\n\n1. Analyze the graph to find the dependencies between functions.\n2. Create worker threads that traverse the graph.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Data Structures Used for the Execution", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "GraphTask\n\nAll the execution metadata is managed by the [`GraphTask`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L51-L196) class in [torch/csrc/autograd/engine.h](https://github.com/pytorch/pytorch/blob/release/1.11/torch/csrc/autograd/engine.h)", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nstruct GraphTask: std::enable_shared_from_this {\n std::atomic outstanding_tasks_{0};\n // \u2026 \n std::unordered_map not_ready_;\n std::unordered_map dependencies_;\n\n struct ExecInfo {\n // \u2026\n };\n std::unordered_map exec_info_;\n std::vector captured_vars_;\n // \u2026\n std::shared_ptr cpu_ready_queue_;\n};", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Here we see a series of variables dedicated to maintaining the execution state.\n`outstanding_tasks_` tracks the number of tasks left to be executed for the backward pass to complete. `not_ready_` holds the input arguments for the `Node`s that are not ready to be executed. `dependencies_` tracks the number of predecessors that a `Node` has. As the count reaches `0`, the `Node` is ready for execution; it is placed in a ready queue to be retrieved and executed later.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "`exec_info_` and the associated `ExecInfo` struct are used only when the `inputs` argument is specified or it is a call to `autograd.grad()`. They allow filtering out paths in the graph that are not needed, since gradients are calculated only for the variables in the `inputs` list.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "`captured_vars_` is where the results of the graph execution are temporarily stored if we used the `torch.autograd.grad()` api instead of `torch.autograd.backward()` since `grad()` returns the gradients as tensors instead of just filling the `.grad` field of the inputs.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "NodeTask", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The [`NodeTask`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L210-L242) struct is a basic class that holds an `fn_` pointer to the node to execute, and an `inputs_` buffer to store the input arguments to this function. Note that the functions executed by the backward pass are the derivatives specified in the `derivatives.yaml` file, or the user-provided backward function when using custom functions, as described in the second blog post.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The `inputs_` buffer is also where the output gradients of the previously executed functions are aggregated, and it is defined as a [`std::vector` container](https://github.com/pytorch/pytorch/blob/release/1.10/torch/csrc/autograd/input_buffer.h) with facilities to accumulate values at a given position.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nstruct NodeTask {\n std::weak_ptr base_;\n std::shared_ptr fn_;\n // This buffer serves as an implicit \"addition\" node for all of the\n // gradients flowing here. Once all the dependencies are finished, we\n // use the contents of this buffer to run the function.\n InputBuffer inputs_;\n};\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "GraphRoot\n\nThe [`GraphRoot`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/basic_ops.h#L72-L89) is a special function used to hold multiple input variables in a single place. The code is pretty simple as it only acts as a container of variables.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nstruct TORCH_API GraphRoot : public Node {\n GraphRoot(edge_list functions, variable_list inputs)\n : Node(std::move(functions)),\n outputs(std::move(inputs)) {\n for (const auto& t : outputs) {\n add_input_metadata(t);\n }\n }\n\n variable_list apply(variable_list&& inputs) override {\n return outputs;\n }\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "AccumulateGrad\n\nThis function is set during the graph creation in `gradient_edge` when the `Variable` object doesn\u2019t have a `grad_fn`, that is, when it is a leaf node.\n\n```c++\n if (const auto& gradient = self.grad_fn()) {\n // \u2026\n } else {\n return Edge(grad_accumulator(self), 0);\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The function body is defined in [`torch/csrc/autograd/functions/accumulate_grad.cpp`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/accumulate_grad.cpp#L25-L63) and it essentially accumulates the input grads in the object\u2019s `.grad` attribute.\n\n```c++\nauto AccumulateGrad::apply(variable_list&& grads) -> variable_list {\n check_input_variables(\"AccumulateGrad\", grads, 1, 0);\n \u2026", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "at::Tensor new_grad = callHooks(variable, std::move(grads[0]));\n std::lock_guard lock(mutex_);\n\n at::Tensor& grad = variable.mutable_grad();\n accumulateGrad(\n variable,\n grad,\n new_grad,\n 1 + !post_hooks().empty() /* num_expected_refs */,\n [&grad](at::Tensor&& grad_update) { grad = std::move(grad_update); });\n return variable_list();\n}\n}} // namespace torch::autograd", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[`accumulateGrad`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/accumulate_grad.h#L100)\ndoes several checks on the tensors format and eventually performs the `variable_grad += new_grad;` accumulation.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
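-{"page_content": "The accumulate-rather-than-overwrite behavior is easy to observe from Python; a small illustrative sketch:\n\n```python\n>>> x = torch.tensor([1.0, 2.0], requires_grad=True)\n>>> y = (x * x).sum()\n>>> y.backward(retain_graph=True)\n>>> x.grad                      # dy/dx = 2*x\ntensor([2., 4.])\n>>> y.backward()\n>>> x.grad                      # AccumulateGrad added the new gradient on top\ntensor([4., 8.])\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}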
-{"page_content": "Preparing the graph for execution\n\nNow, let\u2019s walk through [`Engine::execute`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L969-L1126). The first thing to do besides arguments consistency checks is to create the actual `GraphTask` object we described above. This object keeps all the metadata of the graph execution.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nauto Engine::execute(const edge_list& roots,\n const variable_list& inputs,\n bool keep_graph,\n bool create_graph,\n bool accumulate_grad,\n const edge_list& outputs) -> variable_list {\n\n validate_outputs(roots, const_cast(inputs), [](const std::string& msg) {\n return msg;\n });\n\n // Checks", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "auto graph_task = std::make_shared(\n /* keep_graph */ keep_graph,\n /* create_graph */ create_graph,\n /* depth */ not_reentrant_backward_call ? 0 : total_depth + 1,\n /* cpu_ready_queue */ local_ready_queue);\n\n // If we receive a single root, skip creating extra root node\n // \u2026\n // Prepare graph by computing dependencies\n // \u2026\n // Queue the root \n // \u2026\n // launch execution\n // \u2026\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "After creating the `GraphTask`, we use its associated function if we only have one root node. If we have multiple root nodes, we create a special `GraphRoot` object as described before.\n\n```c++\n bool skip_dummy_node = roots.size() == 1;\n auto graph_root = skip_dummy_node ?\n roots.at(0).function :\n std::make_shared(roots, inputs);", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The next step is to fill the `dependencies_` map in the `GraphTask` object since the engine must know when it can execute a task. The `outputs` here is the `inputs` argument passed to the `torch.autograd.backward()` call in Python. But here, we have reversed the names since the gradients w.r.t. the inputs of the forward pass are now the outputs of the backward pass. And from now on, there is no concept of forward/backward, but only graph traversal and execution.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n auto min_topo_nr = compute_min_topological_nr(outputs);\n // Now compute the dependencies for all executable functions\n compute_dependencies(graph_root.get(), *graph_task, min_topo_nr);\n\n if (!outputs.empty()) {\n graph_task->init_to_execute(*graph_root, outputs, accumulate_grad, min_topo_nr);\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Here we preprocess the graph for the execution of the nodes. First, [`compute_min_topological_nr`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L922-L933) is called to obtain the minimum topological number of the tensors specified in `outputs` (0 if no `inputs` kwarg was supplied to `.backward` or `inputs` for `.grad`). This computation prunes paths in the graph that lead to input variables for which we don\u2019t want/need to calculate the", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "grads.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Second, is the [`compute_dependencies`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L935-L967) call. This function is a very simple graph traversal that starts with the root `Node`, and for each of the edges in `node.next_edges()` it increments the counter in `dependencies_`. Figure 3 shows the result of the dependencies calculation for the example graph. Note that the number of dependencies of any node is just the number of edges arriving", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "at it.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
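-{"page_content": "The same counting can be reproduced from Python over the `grad_fn`/`next_functions` view of the graph. The helper below is a hypothetical sketch of what `compute_dependencies` does, not the engine's actual code:\n\n```python\nfrom collections import deque\n\nimport torch\n\ndef count_dependencies(root_fn):\n    # Breadth-first walk from the root node, incrementing a counter for every\n    # edge that arrives at a node (mirrors GraphTask::dependencies_).\n    dependencies = {}\n    seen, queue = {root_fn}, deque([root_fn])\n    while queue:\n        fn = queue.popleft()\n        for next_fn, _input_nr in fn.next_functions:\n            if next_fn is None:\n                continue\n            dependencies[next_fn] = dependencies.get(next_fn, 0) + 1\n            if next_fn not in seen:\n                seen.add(next_fn)\n                queue.append(next_fn)\n    return dependencies\n\nx = torch.tensor([0.5, 0.75], requires_grad=True)\nz = (torch.exp(x) + x).sum()\nprint(count_dependencies(z.grad_fn))  # AccumulateGrad for x ends up with 2 dependencies\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}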
-{"page_content": "Figure 3: Number of dependencies for each node", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Finally, there is the [`init_to_execute`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1281-L1383) call; this is the one that populates the `GraphTask::exec_info_` map in case `inputs` were specified in the Python `backward` call. It iterates the graph again, starting from the root, and records in the `exec_info_` map the intermediate nodes needed to calculate only the gradients of the given `inputs`.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n // Queue the root\n if (skip_dummy_node) {\n InputBuffer input_buffer(roots.at(0).function->num_inputs());\n auto input = inputs.at(0);\n\n\n input_buffer.add(roots.at(0).input_nr,\n std::move(input),\n input_stream,\n opt_next_stream);", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "execute_with_graph_task(graph_task, graph_root, std::move(input_buffer));\n } else {\n execute_with_graph_task(graph_task, graph_root, InputBuffer(variable_list()));\n }\n // Avoid a refcount bump for the Future, since we check for refcount in\n // DistEngine (see TORCH_INTERNAL_ASSERT(futureGrads.use_count() == 1)\n // in dist_engine.cpp).\n auto& fut = graph_task->future_result_;\n fut->wait();\n return fut->value().toTensorVector();\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "And now, we are ready to start the actual execution by creating the `InputBuffer`. In case we only have one root variable, we begin by copying the value of the `inputs` tensor (this is the `gradients` passed to python `backward`) in position 0 of the input_buffer. This is a small optimization that avoids running the `RootNode` for no reason. Also, if the rest of the graph is not on the cpu, we directly start on that worker while the `RootNode` is always placed on the cpu ready queue. Details of the workers", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "and ready queues are explained in the section below.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "On the other hand, if we have multiple roots, the `GraphRoot` object also holds the inputs, so it is enough to pass it an empty `InputBuffer`.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Graph Traversal and Node Execution", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Devices, Threads and Queues\n\nBefore diving into the actual execution, we need to see how the engine is structured.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "First of all, the engine is multithreaded with one thread per device. For example, the caller thread is associated with the CPU while additional threads are created and associated with each GPU or other devices available in the system. Each thread tracks its device using thread-local storage in the [`worker_device`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L69) variable. In addition, the threads have a queue of tasks to be executed also", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "located in thread-local storage, the [`local_ready_queue`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L103-L104). This is where work is queued for this thread to execute in the `thread_main` function that is explained later.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "You will wonder how the device where a task should be executed is decided. The `InputBuffer` class has a [`device()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/input_buffer.cpp#L173-L189) function that returns the first non-cpu device of all its tensors.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "This function is used together with [`Engine::ready_queue`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1181-L1190) to select the queue to queue a task.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nauto Engine::ready_queue(std::shared_ptr cpu_ready_queue, at::Device device) -> std::shared_ptr{\n if (device.type() == at::kCPU || device.type() == at::DeviceType::Meta) {\n return cpu_ready_queue;\n } else {\n // See Note [Allocating GPUs to autograd threads]\n return device_ready_queues_.at(device.index());\n }\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The [`ReadyQueue`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L245-L283) object is defined in `torch/csrc/autograd/engine.h` and it is a simple wrapper over `std::priority_queue` that allows a thread to [wait for a task](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L219) if it\u2019s empty. One interesting property of the `ReadyQueue` is that it increases the", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[`GraphTask::outstanding_tasks_`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L195) value used to determine if the execution has completed or not.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nauto ReadyQueue::push(NodeTask item, bool incrementOutstandingTasks) -> void {\n {\n std::lock_guard lock(mutex_);\n if (incrementOutstandingTasks) {\n std::shared_ptr graph_task = item.base_.lock();\n ++graph_task->outstanding_tasks_;\n }\n heap_.push(std::move(item));\n }\n not_empty_.notify_one();\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "auto ReadyQueue::pop() -> NodeTask {\n std::unique_lock lock(mutex_);\n not_empty_.wait(lock, [this]{ return !heap_.empty(); });\n auto task = std::move(const_cast(heap_.top())); heap_.pop();\n return task;\n}\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Reentrant Backward\n\nA reentrant backward happens when one of the tasks in a backward pass calls again `backward`. It is not a very common case, but it can be used to reduce memory utilization as it could potentially avoid saving intermediate results. For more information, check this [PyTorch forum post](https://discuss.pytorch.org/t/what-is-the-scenario-of-reentrant-backwards-in-pytorch-source-code/19330/2).", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```python\nclass ReentrantBackward(torch.autograd.Function):\n @staticmethod\n def forward(ctx, input):\n return input.sum()\n\n @staticmethod\n def backward(ctx, input):\n # Let's compute the backward by using autograd\n input = input.detach().requires_grad_()\n with torch.enable_grad():\n out = input.sum()\n out.backward() # REENTRANT CALL!!\n return out.detach()", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Here, we call `backward()` inside `backward()` for a user-defined custom autograd function.\nThis situation can lead to deadlocks because the first backward needs to wait for the second one to complete. But some internal implementation details can prevent the second backward from completing, as explained in the dedicated subsection.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Thread Initialization\n\n[`execute_with_graph_task`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1054-L1126) is in charge of initializing the threads taking care of the computation and placing the `root` node in the queue of the device that produced it.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nc10::intrusive_ptr Engine::execute_with_graph_task(\n const std::shared_ptr& graph_task,\n std::shared_ptr graph_root,\n InputBuffer&& input_buffer) {\n\n initialize_device_threads_pool();\n // Lock mutex for GraphTask.\n std::unique_lock lock(graph_task->mutex_);\n\n auto queue = ready_queue(graph_task->cpu_ready_queue_, input_buffer.device());", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (worker_device == NO_DEVICE) {\n set_device(CPU_DEVICE);\n graph_task->owner_ = worker_device;\n queue->push(NodeTask(graph_task, std::move(graph_root), std::move(input_buffer)));\n lock.unlock();\n thread_main(graph_task);\n worker_device = NO_DEVICE;\n } else {\n // This deals with reentrant backwards, we will see it later.\n }\n return graph_task->future_result_;\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "First, this function initializes several threads (one per device) calling [`initialize_device_threads_pool()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1046-L1052) where several things happen:\n\n* One `ReadyQueue` per device is created.\n* One thread per non-cpu device is created.\n* A thread local `worker_device` variable is set to track the current device associated with the thread.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "* The `thread_main` function is called, and threads wait for tasks to be put in their queues.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Then it retrieves the queue to place the root node based on the device that holds the tensors present in the `input_buffer` using the `ready_queue` function. Now, the main thread (the one also executing the Python interpreter) has its `worker_device` set to `NO_DEVICE`, and it is in charge of executing functions with all its tensors living in the cpu. If `worker_device` is set to any other value, the graph execution is already started, and `.backward()` was called inside a running `Node`, creating a", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "reentrant backward call. This is explained later. For now,", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "the main thread places the task in the queue and calls `thread_main`.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Where the Magic Happens\n\nIt\u2019s been a long way, but finally, we are ready to traverse the graph and execute the nodes. Each of the spawned threads, and the main thread call [`thread_main`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L377-L464).\n\n```c++\nauto Engine::thread_main(const std::shared_ptr& graph_task) -> void {", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "while (graph_task == nullptr || !graph_task->future_result_->completed()) {\n std::shared_ptr local_graph_task;\n {\n NodeTask task = local_ready_queue->pop();\n\n if (task.isShutdownTask_) {\n break;\n }\n\n if (!(local_graph_task = task.base_.lock())) {\n // GraphTask for function is no longer valid, skipping further\n // execution.\n continue;\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (task.fn_ && !local_graph_task->has_error_.load()) {\n at::ThreadLocalStateGuard tls_guard(local_graph_task->thread_locals_);", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "try {\n GraphTaskGuard guard(local_graph_task);\n NodeGuard ndguard(task.fn_);\n {\n evaluate_function(\n local_graph_task,\n task.fn_.get(),\n task.inputs_,\n local_graph_task->cpu_ready_queue_);\n }\n } catch (std::exception& e) {\n thread_on_exception(local_graph_task, task.fn_, e);\n }\n }\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Decrement the outstanding tasks.\n --local_graph_task->outstanding_tasks_;", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Check if we've completed execution.\n if (local_graph_task->completed()) {\n local_graph_task->mark_as_completed_and_run_post_processing();\n auto base_owner = local_graph_task->owner_;\n if (worker_device != base_owner) {\n std::atomic_thread_fence(std::memory_order_release);\n ready_queue_by_index(local_graph_task->cpu_ready_queue_, base_owner)\n ->push(NodeTask(local_graph_task, nullptr, InputBuffer(0)));\n }\n }\n }\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The code here is simple, given the `local_ready_queue` assigned to each thread in thread-local storage. The threads loop until there are no tasks left to execute in the graph. Note that for device-associated threads, the passed `graph_task` argument is [`nullptr`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L326-L327), and they block in `local_ready_queue->pop()` until a task is pushed in their queue. After some consistency checks (the task", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "type is shutdown, or the graph is still valid). We get to the actual function invocation in `evaluate_function`.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n try {\n GraphTaskGuard guard(local_graph_task);\n NodeGuard ndguard(task.fn_);\n {\n evaluate_function(\n local_graph_task,\n task.fn_.get(),\n task.inputs_,\n local_graph_task->cpu_ready_queue_);\n }\n } catch (std::exception& e) {\n thread_on_exception(local_graph_task, task.fn_, e);\n }\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "After calling `evaluate_function`, we check if the `graph_task` execution is complete by looking the `outstanding_tasks_` number. This number increases when a task is pushed to a queue and is decreased in `local_graph_task->completed()` when a task is executed. When the execution is done, we return the results that are be in the `captured_vars_` in case we called `torch.autograd.grad()` instead of `torch.autograd.backward()` as this function returns tensors instead of storing them in the `.grad` attribute", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "of the inputs. Finally we wake up the main thread if it\u2019s waiting by sending a dummy task.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n // Decrement the outstanding tasks.\n --local_graph_task->outstanding_tasks_;", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Check if we've completed execution.\n if (local_graph_task->completed()) {\n local_graph_task->mark_as_completed_and_run_post_processing();\n auto base_owner = local_graph_task->owner_;\n if (worker_device != base_owner) {\n std::atomic_thread_fence(std::memory_order_release);\n ready_queue_by_index(local_graph_task->cpu_ready_queue_, base_owner)\n ->push(NodeTask(local_graph_task, nullptr, InputBuffer(0)));\n }\n }\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Calling the Function and Unlocking New Tasks\n\n[`evaluate_function`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L786-L920) serves three purposes:\n\nRun the function.\nAccumulate its results in the next node `InputBuffers`.\nDecrease the dependencies counter of the next nodes and enqueues the tasks reaching 0 to be executed.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nvoid Engine::evaluate_function(\n std::shared_ptr& graph_task,\n Node* func,\n InputBuffer& inputs,\n const std::shared_ptr& cpu_ready_queue) {\n\n // If exec_info_ is not empty, we have to instrument the execution\n auto& exec_info_ = graph_task->exec_info_;\n if (!exec_info_.empty()) {\n // Checks if the function needs to be executed \n if (!fn_info.needed_) {\n // Skip execution if we don't need to execute the function.\n return;\n }\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "auto outputs = call_function(graph_task, func, inputs);\n\n auto& fn = *func;\n if (!graph_task->keep_graph_) {\n fn.release_variables();\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Initially, we check the `exec_info_` map of the `GraphTask` structure to determine if the current node needs to be executed. Remember that if this map is empty, all the nodes are executed because we are calculating the grads for all the inputs of the forward pass.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "After this check, the function is executed by running [`call_function`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L735-L784). Its implementation is very straightforward and calls the actual derivative function and registered hooks if any.\n\n```c++\n int num_outputs = outputs.size();\n if (num_outputs == 0) {\n // Records leaf stream (if applicable)\n return;\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (AnomalyMode::is_enabled()) {\n // check for nan values in result\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Next, we check the outputs of the function after `call_function` is done. If the number of outputs is 0, there are no following nodes to be executed so we can safely return. This is the case of the `AccumulateGrad` node associated with the leaf nodes.\n\n Also, the check for `NaN` values in the gradients is done here if requested.\n```c++", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "std::lock_guard lock(graph_task->mutex_);\n for (const auto i : c10::irange(num_outputs)) {\n auto& output = outputs[i];\n const auto& next = fn.next_edge(i);\n\n if (!next.is_valid()) continue;", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "We have now executed a `grad_fn` that has returned one gradient per each of the associated forward pass function inputs. As we saw in the [previous blog post](https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/#linking-nodes-together), we have an `Edge` object per each of these input tensors, and the `grad_fn` of the function producing them in the forward pass. Essentially, Output[0] of the node in the backward pass, corresponds to the first argument of the forward pass associated", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "function. Figure 4 shows how the outputs of a backward function are related to the inputs of the forward function. See that the outputs of `grad_fn C` are the gradients of `z` w.r.t. the inputs of `Function C`", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\n
\n
\n\n\nFigure 4: Correspondence between forward and backward functions inputs and outputs\n
\n\nWe now iterate through these edges and check if the associated functions are ready to be executed.\n\n```c++\n // Check if the next function is ready to be computed\n bool is_ready = false;\n auto& dependencies = graph_task->dependencies_;\n auto it = dependencies.find(next.function.get());", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (it == dependencies.end()) {\n auto name = next.function->name();\n throw std::runtime_error(std::string(\"dependency not found for \") + name);\n } else if (--it->second == 0) {\n dependencies.erase(it);\n is_ready = true;\n }\n\n auto& not_ready = graph_task->not_ready_;\n auto not_ready_it = not_ready.find(next.function.get());", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "For this, we check the `graph_task->dependencies_` map. We decrement the counter, and if it reaches 0, we mark the function pointed by the edge ready to be executed. Following, we prepare the input buffers of the tasks indicated by the next edges.\n\n```c++\n if (not_ready_it == not_ready.end()) {\n if (!exec_info_.empty()) {\n // Skip functions that aren't supposed to be executed\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// Creates an InputBuffer and moves the output to the corresponding input position\n InputBuffer input_buffer(next.function->num_inputs());\n input_buffer.add(next.input_nr,\n std::move(output),\n opt_parent_stream,\n opt_next_stream);", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (is_ready) {\n auto queue = ready_queue(cpu_ready_queue, input_buffer.device());\n queue->push(\n NodeTask(graph_task, next.function, std::move(input_buffer)));\n } else {\n not_ready.emplace(next.function.get(), std::move(input_buffer));\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Here, we look for the task in the `graph_task->not_ready_` map. If it is not present, we create a new `InputBuffer` object and set the current output in the `input_nr` position of the buffer associated with the edge. If the task is ready to be executed, we enqueue it in the appropriate device `ready_queue` and complete the execution. However, if the task is not ready and we have seen it before, it is present in the `not_ready_map_`.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n } else {\n // The function already has a buffer\n auto &input_buffer = not_ready_it->second;\n // Accumulates into buffer\n input_buffer.add(next.input_nr,\n std::move(output),\n opt_parent_stream,\n opt_next_stream);\n if (is_ready) {\n auto queue = ready_queue(cpu_ready_queue, input_buffer.device());\n queue->push(NodeTask(graph_task, next.function, std::move(input_buffer)));", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "not_ready.erase(not_ready_it);\n }\n }\n }\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In this case, we accumulate the output in the existing `input_buffer` instead of creating a new one. Once all the tasks are processed, the worker thread exits the loop and complete.\nAll this process is summarized in the animation in Figure 5. We see how a thread peeks at the tasks in the ready queue and decrements the next nodes' dependencies, unlocking them for execution.\n\n\n
\n
", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "\nFigure 5: Animation of the execution of the computational graph\n
", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Flow with Reentrant Backward\n\nAs we saw above, the reentrant backward problem is when the currently executed function does a nested call to `backward`. When this happens, the thread running this function goes all the way down to `execute_with_graph_task` as in the non-reentrant case, but here is when things are different.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nc10::intrusive_ptr Engine::execute_with_graph_task(\n const std::shared_ptr& graph_task,\n std::shared_ptr graph_root,\n InputBuffer&& input_buffer) {\n\n initialize_device_threads_pool();\n // Lock mutex for GraphTask.\n std::unique_lock lock(graph_task->mutex_);\n\n auto queue = ready_queue(graph_task->cpu_ready_queue_, input_buffer.device());", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (worker_device == NO_DEVICE) {\n //Regular case\n } else {\n // If worker_device is any devices (i.e. CPU, CUDA): this is a re-entrant\n // backward call from that device.\n graph_task->owner_ = worker_device;\n\n // Now that all the non-thread safe fields of the graph_task have been populated,\n // we can enqueue it.\n queue->push(NodeTask(graph_task, std::move(graph_root), std::move(input_buffer)));", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (current_depth >= max_recursion_depth_) {\n // If reached the max depth, switch to a different thread\n add_thread_pool_task(graph_task);\n } else {\n ++total_depth;\n ++current_depth;\n lock.unlock();\n thread_main(graph_task);\n --current_depth;\n --total_depth;\n }\n }\n return graph_task->future_result_;\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Here, `execute_with_graph_task` detects this as a reentrant call and then looks for the current number of nested calls. If it exceeds the limit, we create a new thread to take care of the execution of this graph, and if not, we execute this reentrant call regularly.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The limit of nested calls was originally set to avoid stack overflow due to reentrant calls creating very large call stacks. However, the number was further reduced when sanitizer tests were added because of the maximum amount of locks a thread can hold at a given moment. This can be seen in [`torch/csrc/autograd/engine.h`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L36-L42).", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "When this maximum depth is exceeded, a new thread is created with the [`add_thread_pool_task`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1239-L1255) function.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nvoid Engine::add_thread_pool_task(const std::weak_ptr& graph_task) {\n std::unique_lock lck(thread_pool_shared_->mutex_);\n // if we have pending graph_task objects to be processed, create a worker.\n bool create_thread = (thread_pool_shared_->num_workers_ <= thread_pool_shared_->graphtasks_queue_.size());\n thread_pool_shared_->graphtasks_queue_.push(graph_task);", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "lck.unlock();\n if (create_thread) {\n std::thread t(&Engine::reentrant_thread_init, this);\n t.detach();\n }\n\n thread_pool_shared_->work_.notify_one();\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Before going in-depth, let's look at the `thread_pool_shared_` object in the [`Engine`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L421) which manages all the information related to the threads associated to the reentrant backward calls.\n\n```c++\n struct ThreadPoolShared {\n unsigned int num_workers_;\n std::condition_variable work_;\n std::mutex mutex_;\n std::queue> graphtasks_queue_;", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "// NOLINTNEXTLINE(cppcoreguidelines-pro-type-member-init)\n ThreadPoolShared() : num_workers_(0) {}\n };", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[`ThreadPoolShared`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L398-L414) is a simple container holding a queue of `GraphTask` objects with synchronization mechanisms and the number of current workers.\n\nNow it is easy to understand how `add_thread_pool_task` creates a thread when there are `graph_task` objects enqueued and insufficient workers to process them.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "`add_thread_pool_task` initializes a thread by executing [`reentrant_thread_init`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L471-L493)", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nvoid Engine::reentrant_thread_init() {\n at::init_num_threads();\n auto tp_shared = thread_pool_shared_;\n while(true) {\n std::unique_lock lk(tp_shared->mutex_);\n ++thread_pool_shared_->num_workers_;\n tp_shared->work_.wait(lk, [&tp_shared]{ return !tp_shared->graphtasks_queue_.empty();});\n --thread_pool_shared_->num_workers_;\n auto task = tp_shared->graphtasks_queue_.front();\n tp_shared->graphtasks_queue_.pop();\n lk.unlock();\n std::shared_ptr graph_task;", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "if (!(graph_task = task.lock())) {\n continue;\n }\n set_device(graph_task->owner_);\n // set the local_ready_queue to the ready queue on the graph_task->owner_ device\n local_ready_queue = ready_queue_by_index(graph_task->cpu_ready_queue_, graph_task->owner_);\n total_depth = graph_task->reentrant_depth_;\n thread_main(graph_task);\n }\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "The code is straightforward. The newly created thread waits on the `thread_pool_shared->graphtasks_queue_` for reentrant backward graphs to be available and executes them. Notice that this thread uses the task-ready queue associated with the device of the thread that started this call by accessing the `graph_task->owner_` field set in the [`execute_with_graph_task`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1092) function.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "Error Handling\n\nWhenever an error happens in one of the worker threads. It will be propagated to the `backward` calling thread.\n\nTo achieve this, there is a try/catch block in the [`thread_main`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L415-L438) that catches any exception in the `Node` function call and sets it to the associated `GraphTask` object.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n try {\n \u2026\n GraphTaskGuard guard(local_graph_task);\n NodeGuard ndguard(task.fn_);\n {\n evaluate_function(\n \u2026\n }\n } catch (std::exception& e) {\n thread_on_exception(local_graph_task, task.fn_, e);\n }\n }\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "[`thread_on_exception`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L495-L500) and the [functions it calls](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L605-L621) end up setting the exception in the `local_graph_task` object.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\nvoid Engine::thread_on_exception(\n std::shared_ptr graph_task,\n const std::shared_ptr& fn,\n std::exception& e) {\n graph_task->set_exception(std::current_exception(), fn);\n}\n\nvoid GraphTask::set_exception_without_signal(const std::shared_ptr& fn) {\n if (!has_error_.exchange(true)) {\n if (AnomalyMode::is_enabled() && fn) {\n fn->metadata()->print_stack(fn->name());\n }\n }\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "void GraphTask::set_exception(\n std::exception_ptr eptr,\n const std::shared_ptr& fn) {\n set_exception_without_signal(fn);\n if (!future_completed_.exchange(true)) {\n // NOLINTNEXTLINE(performance-move-const-arg)\n future_result_->setError(std::move(eptr));\n }\n}", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "In `set_exception` it sets the `has_error_` flag to `true` and it calls the [`setError`]()\nfunction of the [`future_result_`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/aten/src/ATen/core/ivalue_inl.h#L770-L1322) object. This will make the error to be re-thrown at the caller thread when `future_result_->value()` is accessed.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "```c++\n IValue value() {\n std::unique_lock lock(mutex_);\n AT_ASSERT(completed());\n if (eptr_) {\n std::rethrow_exception(eptr_);\n }\n return value_;\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "# Closing Remarks\n\nThis has been the last post of this series covering how PyTorch does the auto differentiation. We hope you enjoyed reading it and that now you are familiar enough with PyTorch internals to start contributing in PyTorch development!", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
-{"page_content": "---\nlayout: blog_detail\ntitle: 'PyTorch 1.6 released w/ Native AMP Support, Microsoft joins as maintainers for Windows'\nauthor: Team PyTorch\n---", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "Today, we\u2019re announcing the availability of PyTorch 1.6, along with updated domain libraries. We are also excited to announce the team at [Microsoft is now maintaining Windows builds and binaries](https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch) and will also be supporting the community on GitHub as well as the PyTorch Windows discussion forums.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "The PyTorch 1.6 release includes a number of new APIs, tools for performance improvement and profiling, as well as major updates to both distributed data parallel (DDP) and remote procedure call (RPC) based distributed training. \nA few of the highlights include:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "1. Automatic mixed precision (AMP) training is now natively supported and a stable feature (See [here](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/) for more details) - thanks for NVIDIA\u2019s contributions; \n2. Native TensorPipe support now added for tensor-aware, point-to-point communication primitives built specifically for machine learning; \n3. Added support for complex tensors to the frontend API surface;", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "4. New profiling tools providing tensor-level memory consumption information;\n5. Numerous improvements and new features for both distributed data parallel (DDP) training and the remote procedural call (RPC) packages.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "Additionally, from this release onward, features will be classified as Stable, Beta and Prototype. Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via compiler flag. You can learn more about what this change means in the post [here](https://pytorch.org/blog/pytorch-feature-classification-changes/). You can also find the full release notes [here](https://github.com/pytorch/pytorch/releases).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "# Performance & Profiling", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "[Stable] Automatic Mixed Precision (AMP) Training", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "AMP allows users to easily enable automatic mixed precision training enabling higher performance and memory savings of up to 50% on Tensor Core GPUs. Using the natively supported `torch.cuda.amp` API, AMP provides convenience methods for mixed precision, where some operations use the `torch.float32 (float)` datatype and other operations use `torch.float16 (half)`. Some ops, like linear layers and convolutions, are much faster in `float16`. Other ops, like reductions, often require the dynamic range of", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "`float32`. Mixed precision tries to match each op to its appropriate datatype.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "* Design doc ([Link](https://github.com/pytorch/pytorch/issues/25081))\n* Documentation ([Link](https://pytorch.org/docs/stable/amp.html))\n* Usage examples ([Link](https://pytorch.org/docs/stable/notes/amp_examples.html))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Fork/Join Parallelism \n\nThis release adds support for a language-level construct as well as runtime support for coarse-grained parallelism in TorchScript code. This support is useful for situations such as running models in an ensemble in parallel, or running bidirectional components of recurrent nets in parallel, and allows the ability to unlock the computational power of parallel architectures (e.g. many-core CPUs) for task level parallelism.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "Parallel execution of TorchScript programs is enabled through two primitives: `torch.jit.fork` and `torch.jit.wait`. In the below example, we parallelize execution of `foo`:\n\n```python\nimport torch\nfrom typing import List\n\ndef foo(x):\n return torch.neg(x)\n\n@torch.jit.script\ndef example(x):\n futures = [torch.jit.fork(foo, x) for _ in range(100)]\n results = [torch.jit.wait(future) for future in futures]\n return torch.sum(torch.stack(results))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "print(example(torch.ones([])))\n ```\n \n* Documentation ([Link](https://pytorch.org/docs/stable/jit.html))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Memory Profiler \n\nThe `torch.autograd.profiler` API now includes a memory profiler that lets you inspect the tensor memory cost of different operators inside your CPU and GPU models.\n\nHere is an example usage of the API:\n\n```python\nimport torch\nimport torchvision.models as models\nimport torch.autograd.profiler as profiler\n\nmodel = models.resnet18()\ninputs = torch.randn(5, 3, 224, 224)\nwith profiler.profile(profile_memory=True, record_shapes=True) as prof:\n model(inputs)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "# NOTE: some columns were removed for brevity\nprint(prof.key_averages().table(sort_by=\"self_cpu_memory_usage\", row_limit=10))\n# --------------------------- --------------- --------------- ---------------\n# Name CPU Mem Self CPU Mem Number of Calls\n# --------------------------- --------------- --------------- ---------------\n# empty 94.79 Mb 94.79 Mb 123\n# resize_ 11.48 Mb 11.48 Mb 2", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "# addmm 19.53 Kb 19.53 Kb 1\n# empty_strided 4 b 4 b 1\n# conv2d 47.37 Mb 0 b 20\n# --------------------------- --------------- --------------- ---------------", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "* PR ([Link](https://github.com/pytorch/pytorch/pull/37775))\n* Documentation ([Link](https://pytorch.org/docs/stable/autograd.html#profiler))\n\n# Distributed Training & RPC", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] TensorPipe backend for RPC", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "PyTorch 1.6 introduces a new backend for the RPC module which leverages the TensorPipe library, a tensor-aware point-to-point communication primitive targeted at machine learning, intended to complement the current primitives for distributed training in PyTorch (Gloo, MPI, ...) which are collective and blocking. The pairwise and asynchronous nature of TensorPipe lends itself to new networking paradigms that go beyond data parallel: client-server approaches (e.g., parameter server for embeddings,", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "actor-learner separation in Impala-style RL, ...) and model and pipeline parallel training (think GPipe), gossip SGD, etc.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "```python\n# One-line change needed to opt in\ntorch.distributed.rpc.init_rpc(\n ...\n backend=torch.distributed.rpc.BackendType.TENSORPIPE,\n)\n\n# No changes to the rest of the RPC API\ntorch.distributed.rpc.rpc_sync(...)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "* Design doc ([Link](https://github.com/pytorch/pytorch/issues/35251))\n* Documentation ([Link](https://pytorch.org/docs/stable/rpc/index.html))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] DDP+RPC \n\nPyTorch Distributed supports two powerful paradigms: DDP for full sync data parallel training of models and the RPC framework which allows for distributed model parallelism. Previously, these two features worked independently and users couldn\u2019t mix and match these to try out hybrid parallelism paradigms.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "Starting in PyTorch 1.6, we\u2019ve enabled DDP and RPC to work together seamlessly so that users can combine these two techniques to achieve both data parallelism and model parallelism. An example is where users would like to place large embedding tables on parameter servers and use the RPC framework for embedding lookups, but store smaller dense parameters on trainers and use DDP to synchronize the dense parameters. Below is a simple code snippet. \n\n```python\n// On each trainer", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "remote_emb = create_emb(on=\"ps\", ...)\nddp_model = DDP(dense_model)\n\nfor data in batch:\n with torch.distributed.autograd.context():\n res = remote_emb(data)\n loss = ddp_model(res)\n torch.distributed.autograd.backward([loss])", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "* DDP+RPC Tutorial ([Link](https://pytorch.org/tutorials/advanced/rpc_ddp_tutorial.html))\n* Documentation ([Link](https://pytorch.org/docs/stable/rpc/index.html))\n* Usage Examples ([Link](https://github.com/pytorch/examples/pull/800))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] RPC - Asynchronous User Functions", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "RPC Asynchronous User Functions supports the ability to yield and resume on the server side when executing a user-defined function. Prior to this feature, when a callee processes a request, one RPC thread waits until the user function returns. If the user function contains IO (e.g., nested RPC) or signaling (e.g., waiting for another request to unblock), the corresponding RPC thread would sit idle waiting for these events. As a result, some applications have to use a very large number of threads and send", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "additional RPC requests, which can potentially lead to performance degradation. To make a user function yield on such events, applications need to: 1) Decorate the function with the `@rpc.functions.async_execution` decorator; and 2) Let the function return a `torch.futures.Future` and install the resume logic as callbacks on the `Future` object. See below for an example:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "```python\n@rpc.functions.async_execution\ndef async_add_chained(to, x, y, z):\n return rpc.rpc_async(to, torch.add, args=(x, y)).then(\n lambda fut: fut.wait() + z\n )\n\nret = rpc.rpc_sync(\n \"worker1\", \n async_add_chained, \n args=(\"worker2\", torch.ones(2), 1, 1)\n)\n \nprint(ret) # prints tensor([3., 3.])", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "* Tutorial for performant batch RPC using Asynchronous User Functions ([Link](https://github.com/pytorch/tutorials/blob/release/1.6/intermediate_source/rpc_async_execution.rst))\n* Documentation ([Link](https://pytorch.org/docs/stable/rpc.html#torch.distributed.rpc.functions.async_execution))\n* Usage examples ([Link](https://github.com/pytorch/examples/tree/master/distributed/rpc/batch))\n\n# Frontend API Updates", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Complex Numbers", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "The PyTorch 1.6 release brings beta level support for complex tensors including torch.complex64 and torch.complex128 dtypes. A complex number is a number that can be expressed in the form a + bj, where a and b are real numbers, and j is a solution of the equation x^2 = \u22121. Complex numbers frequently occur in mathematics and engineering, especially in signal processing and the area of complex neural networks is an active area of research. The beta release of complex tensors will support common PyTorch and", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "complex tensor functionality, plus functions needed by Torchaudio, ESPnet and others. While this is an early version of this feature, and we expect it to improve over time, the overall goal is provide a NumPy compatible user experience that leverages PyTorch\u2019s ability to run on accelerators and work with autograd to better support the scientific community.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "# Mobile Updates\n\nPyTorch 1.6 brings increased performance and general stability for mobile on-device inference. We squashed a few bugs, continued maintenance and added few new features while improving fp32 and int8 performance on a large variety of ML model inference on CPU backend.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "[Beta] Mobile Features and Performance \n\n* Stateless and stateful XNNPACK Conv and Linear operators\n* Stateless MaxPool2d + JIT optimization passes\n* JIT pass optimizations: Conv + BatchNorm fusion, graph rewrite to replace conv2d/linear with xnnpack ops, relu/hardtanh fusion, dropout removal\n* QNNPACK integration removes requantization scale constraint\n* Per-channel quantization for conv, linear and dynamic linear\n* Disable tracing for mobile client to save ~600 KB on full-jit builds", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "# Updated Domain Libraries", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "torchvision 0.7", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "torchvision 0.7 introduces two new pretrained semantic segmentation models, [FCN ResNet50](https://arxiv.org/abs/1411.4038) and [DeepLabV3 ResNet50](https://arxiv.org/abs/1706.05587), both trained on COCO and using smaller memory footprints than the ResNet101 backbone. We also introduced support for AMP (Automatic Mixed Precision) autocasting for torchvision models and operators, which automatically selects the floating point precision for different GPU operations to improve performance while maintaining", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "accuracy.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "* Release notes ([Link](https://github.com/pytorch/vision/releases))", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "torchaudio 0.6\n\ntorchaudio now officially supports Windows. This release also introduces a new model module (with wav2letter included), new functionals (contrast, cvm, dcshift, overdrive, vad, phaser, flanger, biquad), datasets (GTZAN, CMU), and a new optional sox backend with support for TorchScript.\n\n* Release notes ([Link](https://github.com/pytorch/audio/releases))\n\n# Additional updates", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "HACKATHON\n\nThe Global PyTorch Summer Hackathon is back! This year, teams can compete in three categories virtually:", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "1. **PyTorch Developer Tools:** Tools or libraries designed to improve productivity and efficiency of PyTorch for researchers and developers\n 2. **Web/Mobile Applications powered by PyTorch:** Applications with web/mobile interfaces and/or embedded devices powered by PyTorch \n 3. **PyTorch Responsible AI Development Tools:** Tools, libraries, or web/mobile apps for responsible AI development\n\nThis is a great opportunity to connect with the community and practice your machine learning skills.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "* [Join the hackathon](http://pytorch2020.devpost.com/)\n* [Watch educational videos](https://www.youtube.com/pytorch)", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "LPCV Challenge\n\nThe [2020 CVPR Low-Power Vision Challenge (LPCV) - Online Track for UAV video](https://lpcv.ai/2020CVPR/video-track) submission deadline is coming up shortly. You have until July 31, 2020 to build a system that can discover and recognize characters in video captured by an unmanned aerial vehicle (UAV) accurately using PyTorch and Raspberry Pi 3B+.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "Prototype Features\n\nTo reiterate, Prototype features in PyTorch are early features that we are looking to gather feedback on, gauge the usefulness of and improve ahead of graduating them to Beta or Stable. The following features are not part of the PyTorch 1.6 release and instead are available in nightlies with separate docs/tutorials to help facilitate early usage and feedback.", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "Distributed RPC/Profiler\nAllow users to profile training jobs that use `torch.distributed.rpc` using the autograd profiler, and remotely invoke the profiler in order to collect profiling information across different nodes. The RFC can be found [here](https://github.com/pytorch/pytorch/issues/39675) and a short recipe on how to use this feature can be found [here](https://github.com/pytorch/tutorials/tree/master/prototype_source).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "TorchScript Module Freezing\nModule Freezing is the process of inlining module parameters and attributes values into the TorchScript internal representation. Parameter and attribute values are treated as final value and they cannot be modified in the frozen module. The PR for this feature can be found [here](https://github.com/pytorch/pytorch/pull/32178) and a short tutorial on how to use this feature can be found [here](https://github.com/pytorch/tutorials/tree/master/prototype_source).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "Graph Mode Quantization", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "Eager mode quantization requires users to make changes to their model, including explicitly quantizing activations, module fusion, rewriting use of torch ops with Functional Modules and quantization of functionals are not supported. If we can trace or script the model, then the quantization can be done automatically with graph mode quantization without any of the complexities in eager mode, and it is configurable through a `qconfig_dict`. A tutorial on how to use this feature can be found", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "[here](https://github.com/pytorch/tutorials/tree/master/prototype_source).", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "Quantization Numerical Suite\nQuantization is good when it works, but it\u2019s difficult to know what's wrong when it doesn't satisfy the expected accuracy. A prototype is now available for a Numerical Suite that measures comparison statistics between quantized modules and float modules. This is available to test using eager mode and on CPU only with more support coming. A tutorial on how to use this feature can be found [here](https://github.com/pytorch/tutorials/tree/master/prototype_source).\n\n\nCheers!", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
-{"page_content": "Team PyTorch", "metadata": {"source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"}}
+{"page_content": "---\nlayout: blog_detail\ntitle: \"How Computational Graphs are Executed in PyTorch\"\nauthor: Preferred Networks\nfeatured-img: \"\"\n---\n\nWelcome to the last entry into understanding the autograd engine of PyTorch series!\nIf you haven\u2019t read parts [1](https://pytorch.org/blog/overview-of-pytorch-autograd-engine/) & [2](https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/) check them now to understand how PyTorch creates the computational graph for the backward pass!\n\nThis post is based on PyTorch v1.11, so some highlighted parts may differ across versions.\n\n# PyTorch autograd graph execution\n\nThe last post showed how PyTorch constructs the graph to calculate the outputs' derivatives w.r.t. the inputs when executing the forward pass. Now we will see how the execution of the backward pass is coordinated and done by looking at the whole process, starting from Python down to the lower C++ level internals.\n\n# What Happens when Calling `backward()`/`grad()` from Python\n## Using `variable.backward()`", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "After doing all our calculations with an input set to require the gradient, we call `.backward()` on the result to initiate the backward pass execution.\n\n```python\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> y = torch.exp(x).sum()\n>>> y.backward()\n```\n\nCalling [`.backward()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/_tensor.py#L307-L363) on a tensor results in a call to [`torch.autograd.backward()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/autograd/__init__.py#L85-L175).\n```python\n# torch/_tensor.py\n\ndef backward(self, gradient=None, retain_graph=None, create_graph=False, inputs=None):\n \u2026\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\n\n```\n`torch.autograd.backward()` checks the arguments and calls the autograd engine in the C++ layer.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "``` python\ndef backward(\n tensors: _TensorOrTensors,\n grad_tensors: Optional[_TensorOrTensors] = None,\n retain_graph: Optional[bool] = None,\n create_graph: bool = False,\n grad_variables: Optional[_TensorOrTensors] = None,\n inputs: Optional[_TensorOrTensors] = None,\n) -> None:\n \u2026\n\n if inputs is not None and len(inputs) == 0:\n raise RuntimeError(\"'inputs' argument to backward() cannot be empty.\")\n\n tensors = (tensors,) if isinstance(tensors, torch.Tensor) else tuple(tensors)\n inputs = (inputs,) if isinstance(inputs, torch.Tensor) else \\\n tuple(inputs) if inputs is not None else tuple()\n\n grad_tensors_ = _tensor_or_tensors_to_tuple(grad_tensors, len(tensors))\n grad_tensors_ = _make_grads(tensors, grad_tensors_)\n if retain_graph is None:\n retain_graph = create_graph", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Variable._execution_engine.run_backward(\n tensors, grad_tensors_, retain_graph, create_graph, inputs,\n allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag\n\n```\nFirst, whether the `grad_tensors` argument was specified or not, there is a call to the [`_make_grads`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/autograd/__init__.py#L30-L74) function. This is used to check the provided `grad_tensors` or to specify the default value for them by looking at the `tensors` argument values\u2019 shapes. Check the first blog post for details on the default value for the `grad_tensors` of the backward pass. This function just provides the vector of the vector jacobian product if it was not initially specified.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "In the above code, `Variable` has an `_execution_engine` attribute that is defined in [`torch.autograd.variable`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/autograd/variable.py#L14) to be of type `ImperativeEngine`; the C++ engine exported to python and declared in [`torch/csrc/autograd/python_engine.cpp`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/python_engine.cpp#L384). In the following sections, we explain in detail how this object executes the backward pass.\n\nNote that the `torch.autograd.backward` function has an `inputs` optional argument. This argument is used when we want to calculate the `.grad` field of only a subset of input tensors in the forward pass.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```python\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> y = torch.tensor([0.1, 0.90], requires_grad=True)\n>>> z = torch.exp(x * y).sum()\n>>> torch.autograd.backward([z], inputs=[x])\n>>> x.grad\ntensor([0.1051, 1.7676])\n>>> y.grad # None\n>>>\n\n```\n## Using `torch.autograd.grad`\n\nAn alternative to `backward()` is to use [`torch.autograd.grad()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/autograd/__init__.py#L177-L277). The main difference to `backward()` is that `grad()` returns a tuple of tensors with the gradients of the `outputs` w.r.t. the `inputs` kwargs instead of storing them in the `.grad` field of the tensors. As you can see, the `grad()` code shown below is very similar to backward.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```python\ndef grad(\n outputs: _TensorOrTensors,\n inputs: _TensorOrTensors,\n grad_outputs: Optional[_TensorOrTensors] = None,\n retain_graph: Optional[bool] = None,\n create_graph: bool = False,\n only_inputs: bool = True,\n allow_unused: bool = False,\n is_grads_batched: bool = False\n) -> Tuple[torch.Tensor, ...]:\n \n outputs = (outputs,) if isinstance(outputs, torch.Tensor) else tuple(outputs)\n inputs = (inputs,) if isinstance(inputs, torch.Tensor) else tuple(inputs)\n overridable_args = outputs + inputs\n if has_torch_function(overridable_args):\n return handle_torch_function(\n grad,\n overridable_args,\n outputs,\n inputs,\n grad_outputs=grad_outputs,\n retain_graph=retain_graph,\n create_graph=create_graph,\n only_inputs=only_inputs,\n allow_unused=allow_unused,\n )", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "grad_outputs_ = _tensor_or_tensors_to_tuple(grad_outputs, len(outputs))\n grad_outputs_ = _make_grads(outputs, grad_outputs_)\n\n if retain_graph is None:\n retain_graph = create_graph\n\n if is_grads_batched:\n # \u2026. It will not be covered here\n else:\n return Variable._execution_engine.run_backward(\n outputs, grad_outputs_, retain_graph, create_graph, inputs,\n allow_unused, accumulate_grad=False) # Calls into the C++ engine to run the backward pass\n\n```\n\nFigure 1 shows the computational graph with the `backward()` and `grad()` arguments highlighted in red and blue, respectively:\n\n\n
\n
\n\n\nFgiure 1: Correspondence of `backward`/`grad` arguments in the graphs.\n
\n\n# Going Inside the Autograd Engine\n\n## Refreshing Concepts: Nodes and Edges", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "As we saw in [2](https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/)\nThe computational graph comprises `Node` and `Edge` objects. Please read that post if you haven\u2019t done it yet.\n\n### Nodes\n\n`Node` objects are defined in [`torch/csrc/autograd/function.h`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L105-L176), and they provide an overload of `operator()` for the associated function and a list of edges to do the graph traversal. Note that `Node` is a base class that autograd functions inherit from and override the `apply` method to execute the backward function.\n```c++\nstruct TORCH_API Node : std::enable_shared_from_this {\n ...\n /// Evaluates the function on the given inputs and returns the result of the\n /// function call.\n variable_list operator()(variable_list&& inputs) {\n ...\n }", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "protected:\n /// Performs the `Node`'s actual operation.\n virtual variable_list apply(variable_list&& inputs) = 0;\n \u2026\n edge_list next_edges_;\n uint64_t topological_nr_ = 0;\n \u2026\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```\n\nThere is an attribute called [`topological_nr_`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L481) in every node object. This number is used to optimize the graph execution as it allows to discard of graph branches under certain conditions. The topological number is the longest distance between this node and any leaf node and it is shown in Figure 2. Its main property is that for any pair of nodes `x`, `y` in a directed graph `topo_nr(x) < topo_nr(y)` means that there is no path from `x` to `y`. So this allows for reducing the number of paths in the graph in need of traversal. Check the [topological_nr](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/function.h#L314-L343)\n) method comment for further details.\n\n\n
\n
\n\n\nFigure 2: Example of the Topological Number calculation\n
", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### Edges\n\nThe [`Edge`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/edge.h#L14-L39) object links `Node`s together, and its implementation is straightforward.\n\n```c++\nstruct Edge {\n ...\n /// The function this `Edge` points to.\n std::shared_ptr function;\n /// The identifier of a particular input to the function.\n uint32_t input_nr;\n};\n\n```\n\nIt only requires a function pointer to the `Node` and an input number that is the index of the output from the forward function this edge points to. When preparing the set of gradients before calling \"function\", we know that what is flowing from this edge should be accumulated in the \"input_nr\"th argument. Note that the input/output name is flipped here and this is the input to the backward function.\n `Edge` objects are constructed using the [`gradient_edge`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/variable.cpp#L221-L233) function method.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\n Edge gradient_edge(const Variable& self) {\n if (const auto& gradient = self.grad_fn()) {\n return Edge(gradient, self.output_nr());\n } else {\n return Edge(grad_accumulator(self), 0);\n }\n }\n\n```\n## Entering the C++ Realm", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Once that `torch.autograd.backward()` has been invoked, the\n[`THPEngine_run_backward`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/python_engine.cpp#L152-L286) routine starts the graph traversal. Following is a schema of the function body:\n```c++\nPyObject *THPEngine_run_backward(PyObject *self, PyObject *args, PyObject *kwargs)\n{\n HANDLE_TH_ERRORS\n PyObject *tensors = nullptr;\n PyObject *grad_tensors = nullptr;\n unsigned char keep_graph = 0;\n unsigned char create_graph = 0;\n PyObject *inputs = nullptr;\n \n // Convert the python arguments to C++ objects\n const char *accepted_kwargs[] = { // NOLINT\n \"tensors\", \"grad_tensors\", \"keep_graph\", \"create_graph\", \"inputs\",\n \"allow_unreachable\", \"accumulate_grad\", nullptr\n };\n if (!PyArg_ParseTupleAndKeywords(args, kwargs, \"OObb|Obb\", (char**)accepted_kwargs,\n &tensors, &grad_tensors, &keep_graph, &create_graph, &inputs, &allow_unreachable, &accumulate_grad))", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "// Prepare arguments\n for(const auto i : c10::irange(num_tensors)) {\n // Check that the tensors require gradients\n }\n\n std::vector output_edges;\n if (inputs != nullptr) {\n // Prepare outputs\n }\n\n {\n // Calls the actual autograd engine\n pybind11::gil_scoped_release no_gil;\n outputs = engine.execute(roots, grads, keep_graph, create_graph, accumulate_grad, output_edges);\n }\n // Clean up and finish\n}\n\n```\n\nFirst, we prepare the input arguments after converting the `PyObject` arguments to actual C++ objects. The `tensors` list contains the tensors from which we start the backward pass. These tensors are converted to edges using `torch::autograd::impl::gradient_edge` and added to a list called `roots` where the graph traversal starts.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\n edge_list roots;\n roots.reserve(num_tensors);\n variable_list grads;\n grads.reserve(num_tensors);\n for(const auto i : c10::irange(num_tensors)) {\n PyObject *_tensor = PyTuple_GET_ITEM(tensors, i);\n const auto& variable = THPVariable_Unpack(_tensor);\n auto gradient_edge = torch::autograd::impl::gradient_edge(variable);\n roots.push_back(std::move(gradient_edge));\n\n PyObject *grad = PyTuple_GET_ITEM(grad_tensors, i);\n if (THPVariable_Check(grad)) {\n const Variable& grad_var = THPVariable_Unpack(grad);\n grads.push_back(grad_var);\n } \n }\n\n```\n\nNow, if the `inputs` argument was specified in `backward` or we used the `torch.autograd.grad` api, the following code creates a list of edges to accumulate the gradients in the specified tensors at the end of the computation. The engine uses this later to optimize the execution as it doesn\u2019t add the gradients in all the leaf nodes, just the specified ones.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\n std::vector output_edges;\n if (inputs != nullptr) {\n int num_inputs = PyTuple_GET_SIZE(inputs);\n output_edges.reserve(num_inputs);\n for (const auto i : c10::irange(num_inputs)) {\n PyObject *input = PyTuple_GET_ITEM(inputs, i);\n const auto& tensor = THPVariable_Unpack(input);\n const auto output_nr = tensor.output_nr();\n auto grad_fn = tensor.grad_fn();\n if (!grad_fn) {\n grad_fn = torch::autograd::impl::try_get_grad_accumulator(tensor);\n }\n if (accumulate_grad) {\n tensor.retain_grad();\n }\n if (!grad_fn) {\n output_edges.emplace_back(std::make_shared(), 0);\n } else {\n output_edges.emplace_back(grad_fn, output_nr);\n }\n }\n }\n\n```\n\nThe next step is the actual graph traversal and node function execution, and finally, the cleanup and return.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\n {\n // Calls the actual autograd engine\n pybind11::gil_scoped_release no_gil;\n auto& engine = python::PythonEngine::get_python_engine();\n outputs = engine.execute(roots, grads, keep_graph, create_graph, accumulate_grad, output_edges);\n }\n // Clean up and finish\n}\n\n```\n\n# Starting the Real Execution\n\n`engine.execute`is present in [torch/csrc/autograd/engine.cpp](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L969-L1044) \n\nThere are two differentiated steps here:\n\nAnalyze the graph to find dependencies between functions\nCreate worker threads that traverse the graph\n\n## Data Structures Used for the Execution\n\n### GraphTask\n\nAll the execution metadata is managed by the [`GraphTask`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L51-L196) class in [torch/csrc/autograd/engine.h](https://github.com/pytorch/pytorch/blob/release/1.11/torch/csrc/autograd/engine.h)", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nstruct GraphTask: std::enable_shared_from_this {\n std::atomic outstanding_tasks_{0};\n // \u2026 \n std::unordered_map not_ready_;\n std::unordered_map dependencies_;\n\n struct ExecInfo {\n // \u2026\n };\n std::unordered_map exec_info_;\n std::vector captured_vars_;\n // \u2026\n std::shared_ptr cpu_ready_queue_;\n};\n\n```\n\nHere we see a series of variables dedicated to maintaining the execution state.\n`outstanding_tasks_` tracks the number of tasks left to be executed for the backward pass to complete. `not_ready_` holds the input arguments for the `Node`s that are not ready to be executed. `dependencies_` track the number of predecessors that a `Node` has. As the count reaches `0`, the `Node` is ready for execution; it is placed in a ready queue to be retrieved and executed later.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "`exec_info_` and the associated `ExecInfo` struct are used only when the `inputs` argument is specified or it is a call to `autograd.grad()`. They allow filter paths on the graph that are not needeed since only the gradients are calculated only for the variables in the `inputs` list.\n\n `captured_vars_` is where the results of the graph execution are temporarily stored if we used the `torch.autograd.grad()` api instead of `torch.autograd.backward()` since `grad()` returns the gradients as tensors instead of just filling the `.grad` field of the inputs.\n\n\n### NodeTask", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### NodeTask\n\nThe [`NodeTask`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L210-L242) struct is a basic class that holds an `fn_` pointer to the node to execute, and an `inputs_` buffer to store the input arguments to this function. Note that the functions executed by the backward pass are the derivatives specified in the `derivatives.yaml` file. or the user provided backward function when using custom functions as described in the second blog post.\n\nThe `inputs_` buffer is also where the output gradients of the previously executed functions are aggregated, and it is defined as a [`std::vector` container](https://github.com/pytorch/pytorch/blob/release/1.10/torch/csrc/autograd/input_buffer.h) with facilities to accumulate values at a given position.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nstruct NodeTask {\n std::weak_ptr base_;\n std::shared_ptr fn_;\n // This buffer serves as an implicit \"addition\" node for all of the\n // gradients flowing here. Once all the dependencies are finished, we\n // use the contents of this buffer to run the function.\n InputBuffer inputs_;\n};\n\n```\n### GraphRoot\n\nThe [`GraphRoot`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/basic_ops.h#L72-L89) is a special function used to hold multiple input variables in a single place. The code is pretty simple as it only acts as a container of variables.\n\n```c++\nstruct TORCH_API GraphRoot : public Node {\n GraphRoot(edge_list functions, variable_list inputs)\n : Node(std::move(functions)),\n outputs(std::move(inputs)) {\n for (const auto& t : outputs) {\n add_input_metadata(t);\n }\n }\n\n variable_list apply(variable_list&& inputs) override {\n return outputs;\n }\n\n```\n\n### AccumulateGrad", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "### AccumulateGrad\n\nThis function is set during the graph creation in `gradient_edge` when the `Variable` object doesn\u2019t have a `grad_fn`. This is, it is a leaf node.\n\n```c++\n if (const auto& gradient = self.grad_fn()) {\n // \u2026\n } else {\n return Edge(grad_accumulator(self), 0);\n }\n\n```\n\nThe function body is defined in [`torch/csrc/autograd/functions/accumulate_grad.cpp`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/accumulate_grad.cpp#L25-L63) and it essentially accumulates the input grads in the object\u2019s `.grad` attribute.\n\n```c++\nauto AccumulateGrad::apply(variable_list&& grads) -> variable_list {\n check_input_variables(\"AccumulateGrad\", grads, 1, 0);\n \u2026\n\n at::Tensor new_grad = callHooks(variable, std::move(grads[0]));\n std::lock_guard lock(mutex_);", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "at::Tensor& grad = variable.mutable_grad();\n accumulateGrad(\n variable,\n grad,\n new_grad,\n 1 + !post_hooks().empty() /* num_expected_refs */,\n [&grad](at::Tensor&& grad_update) { grad = std::move(grad_update); });\n return variable_list();\n}\n}} // namespace torch::autograd\n\n\n\n```\n\n[`accumulateGrad`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/accumulate_grad.h#L100)\ndoes several checks on the tensors format and eventually performs the `variable_grad += new_grad;` accumulation.\n\n## Preparing the graph for execution\n\nNow, let\u2019s walk through [`Engine::execute`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L969-L1126). The first thing to do besides arguments consistency checks is to create the actual `GraphTask` object we described above. This object keeps all the metadata of the graph execution.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nauto Engine::execute(const edge_list& roots,\n const variable_list& inputs,\n bool keep_graph,\n bool create_graph,\n bool accumulate_grad,\n const edge_list& outputs) -> variable_list {\n\n validate_outputs(roots, const_cast(inputs), [](const std::string& msg) {\n return msg;\n });\n\n // Checks\n\n auto graph_task = std::make_shared(\n /* keep_graph */ keep_graph,\n /* create_graph */ create_graph,\n /* depth */ not_reentrant_backward_call ? 0 : total_depth + 1,\n /* cpu_ready_queue */ local_ready_queue);\n\n // If we receive a single root, skip creating extra root node\n // \u2026\n // Prepare graph by computing dependencies\n // \u2026\n // Queue the root \n // \u2026\n // launch execution\n // \u2026\n}\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```\n\nAfter creating the `GraphTask`, we use its associated function if we only have one root node. If we have multiple root nodes, we create a special `GraphRoot` object as described before.\n\n```c++\n bool skip_dummy_node = roots.size() == 1;\n auto graph_root = skip_dummy_node ?\n roots.at(0).function :\n std::make_shared(roots, inputs);\n\n```\n\nThe next step is to fill the `dependencies_` map in the `GraphTask` object since the engine must know when it can execute a task. The `outputs` here is the `inputs` argument passed to the `torch.autograd.backward()` call in Python. But here, we have reversed the names since the gradients w.r.t. the inputs of the forward pass are now the outputs of the backward pass. And from now on, there is no concept of forward/backward, but only graph traversal and execution.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\n auto min_topo_nr = compute_min_topological_nr(outputs);\n // Now compute the dependencies for all executable functions\n compute_dependencies(graph_root.get(), *graph_task, min_topo_nr);\n\n if (!outputs.empty()) {\n graph_task->init_to_execute(*graph_root, outputs, accumulate_grad, min_topo_nr);\n }\n\n```\n\nHere we preprocess the graph for the execution of the nodes. First, [`compute_min_topological_nr`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L922-L933) is called to to obtain the minimum topological number of the tensors specified in `outputs` (0 if no `inputs` kwarg was supplied to `.backward` or `input` for `.grad`). This computation prunes paths in the graph that lead to input variables of which we don\u2019t want/need to calculate the grads.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Second, is the [`compute_dependencies`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L935-L967) call. This function is a very simple graph traversal that starts with the root `Node`, and for each of the edges in `node.next_edges()` it increments the counter in `dependencies_`. Figure 3 shows the result of the dependencies calculation for the example graph. Note that the number of dependencies of any node is just the number of edges arriving at it.\n\n\n
\n
\n\n\nFigure 3: Number of dependencies for each node\n
", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Finally, the [`init_to_execute`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1281-L1383) call, this is the one that populates the `GraphTask::exec_info_` map in case that `inputs` were specified in the python `backward` call. It iterates the graph again, starting from the root, and records in the `exec_info_` map the intermediate nodes needed to calculate only the given `inputs` gradients.\n\n```c++\n // Queue the root\n if (skip_dummy_node) {\n InputBuffer input_buffer(roots.at(0).function->num_inputs());\n auto input = inputs.at(0);\n\n\n input_buffer.add(roots.at(0).input_nr,\n std::move(input),\n input_stream,\n opt_next_stream);", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "execute_with_graph_task(graph_task, graph_root, std::move(input_buffer));\n } else {\n execute_with_graph_task(graph_task, graph_root, InputBuffer(variable_list()));\n }\n // Avoid a refcount bump for the Future, since we check for refcount in\n // DistEngine (see TORCH_INTERNAL_ASSERT(futureGrads.use_count() == 1)\n // in dist_engine.cpp).\n auto& fut = graph_task->future_result_;\n fut->wait();\n return fut->value().toTensorVector();\n}\n\n```\n\nAnd now, we are ready to start the actual execution by creating the `InputBuffer`. In case we only have one root variable, we begin by copying the value of the `inputs` tensor (this is the `gradients` passed to python `backward`) in position 0 of the input_buffer. This is a small optimization that avoids running the `RootNode` for no reason. Also, if the rest of the graph is not on the cpu, we directly start on that worker while the `RootNode` is always placed on the cpu ready queue. Details of the workers and ready queues are explained in the section below.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "On the other hand, if we have multiple roots, the `GraphRoot` object also holds the inputs, so it is enough to pass it an empty `InputBuffer`.\n\n## Graph Traversal and Node Execution\n### Devices, Threads and Queues\n\nBefore diving into the actual execution, we need to see how the engine is structured.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "First of all, the engine is multithreaded with one thread per device. For example, the caller thread is associated with the CPU while additional threads are created and associated with each GPU or other devices available in the system. Each thread tracks its device using thread-local storage in the [`worker_device`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L69) variable. In addition, the threads have a queue of tasks to be executed also located in thread-local storage, the [`local_ready_queue`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L103-L104). This is where work is queued for this thread to execute in the `thread_main` function that is explained later.\nYou will wonder how the device where a task should be executed is decided. The `InputBuffer` class has a [`device()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/input_buffer.cpp#L173-L189) function that returns the first non-cpu device of all its tensors.\nThis function is used together with [`Engine::ready_queue`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1181-L1190) to select the queue to queue a task.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nauto Engine::ready_queue(std::shared_ptr cpu_ready_queue, at::Device device) -> std::shared_ptr{\n if (device.type() == at::kCPU || device.type() == at::DeviceType::Meta) {\n return cpu_ready_queue;\n } else {\n // See Note [Allocating GPUs to autograd threads]\n return device_ready_queues_.at(device.index());\n }\n}\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```\n\nThe [`ReadyQueue`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L245-L283) object is defined in `torch/csrc/autograd/engine.h` and it is a simple wrapper over `std::priority_queue` that allows a thread to [wait for a task](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L219) if it\u2019s empty. One interesting property of the `ReadyQueue` is that it increases the [`GraphTask::outstanding_tasks_`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L195) value used to determine if the execution has completed or not.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nauto ReadyQueue::push(NodeTask item, bool incrementOutstandingTasks) -> void {\n {\n std::lock_guard lock(mutex_);\n if (incrementOutstandingTasks) {\n std::shared_ptr graph_task = item.base_.lock();\n ++graph_task->outstanding_tasks_;\n }\n heap_.push(std::move(item));\n }\n not_empty_.notify_one();\n}\n\nauto ReadyQueue::pop() -> NodeTask {\n std::unique_lock lock(mutex_);\n not_empty_.wait(lock, [this]{ return !heap_.empty(); });\n auto task = std::move(const_cast(heap_.top())); heap_.pop();\n return task;\n}\n\n```\n\n### Reentrant Backward\n\nA reentrant backward happens when one of the tasks in a backward pass calls again `backward`. It is not a very common case, but it can be used to reduce memory utilization as it could potentially avoid saving intermediate results. For more information, check this [PyTorch forum post](https://discuss.pytorch.org/t/what-is-the-scenario-of-reentrant-backwards-in-pytorch-source-code/19330/2).", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```python\nclass ReentrantBackward(torch.autograd.Function):\n @staticmethod\n def forward(ctx, input):\n return input.sum()\n\n @staticmethod\n def backward(ctx, input):\n # Let's compute the backward by using autograd\n input = input.detach().requires_grad_()\n with torch.enable_grad():\n out = input.sum()\n out.backward() # REENTRANT CALL!!\n return out.detach()\n\n```\n\nHere, we call `backward()` inside `backward()` for a user custom-defined autograd function.\nThis situation can lead to deadlocks because the first backward needs to wait for the second one to complete. But some internal implementation details can prevent the second backward from completing as it is explained in the dedicated subsection.\n## Thread Initialization", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "[`execute_with_graph_task`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1054-L1126) is in charge of initializing the threads taking care of the computation and placing the `root` node in the queue of the device that produced it.\n\n```c++\nc10::intrusive_ptr Engine::execute_with_graph_task(\n const std::shared_ptr& graph_task,\n std::shared_ptr graph_root,\n InputBuffer&& input_buffer) {\n\n initialize_device_threads_pool();\n // Lock mutex for GraphTask.\n std::unique_lock lock(graph_task->mutex_);\n\n auto queue = ready_queue(graph_task->cpu_ready_queue_, input_buffer.device());", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "if (worker_device == NO_DEVICE) {\n set_device(CPU_DEVICE);\n graph_task->owner_ = worker_device;\n queue->push(NodeTask(graph_task, std::move(graph_root), std::move(input_buffer)));\n lock.unlock();\n thread_main(graph_task);\n worker_device = NO_DEVICE;\n } else {\n // This deals with reentrant backwards, we will see it later.\n }\n return graph_task->future_result_;\n}\n\n```\n\nFirst, this function initializes several threads (one per device) calling [` initialize_device_threads_pool()`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1046-L1052) where several things happen:\nOne `ReadyQueue` per device is created.\nOne thread per non-cpu device is created.\nA thread local `worker_device` variable is set to track the current device associated with the thread.\n`thread_main` function is called, and threads wait for tasks to be put in their queues.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "Then it retrieves the queue to place the root node based on the device that holds the tensors present in the `input_buffer` using the `ready_queue` function. Now, the main thread (the one also executing the Python interpreter) has its `worker_device` set to `NO_DEVICE`, and it is in charge of executing functions with all its tensors living in the cpu. If `worker_device` is set to any other value, the graph execution is already started, and `.backward()` was called inside a running `Node`, creating a reentrant backward call. This is explained later. For now, \nthe main thread places the task in the queue and call `thread_main`.\n## Where the Magic Happens\n\nIt\u2019s been a long way, but finally, we are ready to traverse the graph and execute the nodes. Each of the spawned threads, and the main thread call [`thread_main`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L377-L464).", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nauto Engine::thread_main(const std::shared_ptr& graph_task) -> void {\n\n while (graph_task == nullptr || !graph_task->future_result_->completed()) {\n std::shared_ptr local_graph_task;\n {\n NodeTask task = local_ready_queue->pop();\n\n if (task.isShutdownTask_) {\n break;\n }\n\n if (!(local_graph_task = task.base_.lock())) {\n // GraphTask for function is no longer valid, skipping further\n // execution.\n continue;\n }\n\n if (task.fn_ && !local_graph_task->has_error_.load()) {\n at::ThreadLocalStateGuard tls_guard(local_graph_task->thread_locals_);", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "try {\n GraphTaskGuard guard(local_graph_task);\n NodeGuard ndguard(task.fn_);\n {\n evaluate_function(\n local_graph_task,\n task.fn_.get(),\n task.inputs_,\n local_graph_task->cpu_ready_queue_);\n }\n } catch (std::exception& e) {\n thread_on_exception(local_graph_task, task.fn_, e);\n }\n }\n }\n\n // Decrement the outstanding tasks.\n --local_graph_task->outstanding_tasks_;\n\n // Check if we've completed execution.\n if (local_graph_task->completed()) {\n local_graph_task->mark_as_completed_and_run_post_processing();\n auto base_owner = local_graph_task->owner_;\n if (worker_device != base_owner) {\n std::atomic_thread_fence(std::memory_order_release);\n ready_queue_by_index(local_graph_task->cpu_ready_queue_, base_owner)\n ->push(NodeTask(local_graph_task, nullptr, InputBuffer(0)));\n }\n }\n }\n}\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```\n\nThe code here is simple, given the `local_ready_queue` assigned to each thread in thread-local storage. The threads loop until there are no tasks left to execute in the graph. Note that for device-associated threads, the passed `graph_task` argument is [`nullptr`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L326-L327), and they block in `local_ready_queue->pop()` until a task is pushed in their queue. After some consistency checks (the task type is shutdown, or the graph is still valid). We get to the actual function invocation in `evaluate_function`.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\n try {\n GraphTaskGuard guard(local_graph_task);\n NodeGuard ndguard(task.fn_);\n {\n evaluate_function(\n local_graph_task,\n task.fn_.get(),\n task.inputs_,\n local_graph_task->cpu_ready_queue_);\n }\n } catch (std::exception& e) {\n thread_on_exception(local_graph_task, task.fn_, e);\n }\n }\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```\n\nAfter calling `evaluate_function`, we check if the `graph_task` execution is complete by looking the `outstanding_tasks_` number. This number increases when a task is pushed to a queue and is decreased in `local_graph_task->completed()` when a task is executed. When the execution is done, we return the results that are be in the `captured_vars_` in case we called `torch.autograd.grad()` instead of `torch.autograd.backward()` as this function returns tensors instead of storing them in the `.grad` attribute of the inputs. Finally we wake up the main thread if it\u2019s waiting by sending a dummy task.\n\n```c++\n // Decrement the outstanding tasks.\n --local_graph_task->outstanding_tasks_;", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "// Check if we've completed execution.\n if (local_graph_task->completed()) {\n local_graph_task->mark_as_completed_and_run_post_processing();\n auto base_owner = local_graph_task->owner_;\n if (worker_device != base_owner) {\n std::atomic_thread_fence(std::memory_order_release);\n ready_queue_by_index(local_graph_task->cpu_ready_queue_, base_owner)\n ->push(NodeTask(local_graph_task, nullptr, InputBuffer(0)));\n }\n }\n\n```\n\n## Calling the Function and Unlocking New Tasks\n\n[`evaluate_function`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L786-L920) serves three purposes:\n\nRun the function.\nAccumulate its results in the next node `InputBuffers`.\nDecrease the dependencies counter of the next nodes and enqueues the tasks reaching 0 to be executed.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nvoid Engine::evaluate_function(\n std::shared_ptr& graph_task,\n Node* func,\n InputBuffer& inputs,\n const std::shared_ptr& cpu_ready_queue) {\n\n // If exec_info_ is not empty, we have to instrument the execution\n auto& exec_info_ = graph_task->exec_info_;\n if (!exec_info_.empty()) {\n // Checks if the function needs to be executed \n if (!fn_info.needed_) {\n // Skip execution if we don't need to execute the function.\n return;\n }\n }\n\n auto outputs = call_function(graph_task, func, inputs);\n\n auto& fn = *func;\n if (!graph_task->keep_graph_) {\n fn.release_variables();\n }\n\n```\n\nInitially, we check the `exec_info_` map of the `GraphTask` structure to determine if the current node needs to be executed. Remember that if this map is empty, all the nodes are executed because we are calculating the grads for all the inputs of the forward pass.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "After this check, the function is executed by running [`call_function`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L735-L784). Its implementation is very straightforward and calls the actual derivative function and registered hooks if any.\n\n```c++\n int num_outputs = outputs.size();\n if (num_outputs == 0) {\n // Records leaf stream (if applicable)\n return;\n }\n\n if (AnomalyMode::is_enabled()) {\n // check for nan values in result\n }\n\n```\n\nNext, we check the outputs of the function after `call_function` is done. If the number of outputs is 0, there are no following nodes to be executed so we can safely return. This is the case of the `AccumulateGrad` node associated with the leaf nodes.\n\n Also, the check for `NaN` values in the gradients is done here if requested.\n```c++", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "std::lock_guard lock(graph_task->mutex_);\n for (const auto i : c10::irange(num_outputs)) {\n auto& output = outputs[i];\n const auto& next = fn.next_edge(i);\n\n if (!next.is_valid()) continue;\n\n \n\n```\n\nWe have now executed a `grad_fn` that has returned one gradient per each of the associated forward pass function inputs. As we saw in the [previous blog post](https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/#linking-nodes-together), we have an `Edge` object per each of these input tensors, and the `grad_fn` of the function producing them in the forward pass. Essentially, Output[0] of the node in the backward pass, corresponds to the first argument of the forward pass associated function. Figure 4 shows how the outputs of a backward function are related to the inputs of the forward function. See that the outputs of `grad_fn C` are the gradients of `z` w.r.t. the inputs of `Function C`", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "\n
\n
\n\n\nFigure 4: Correspondence between forward and backward functions inputs and outputs\n
\n\nWe now iterate through these edges and check if the associated functions are ready to be executed.\n\n```c++\n // Check if the next function is ready to be computed\n bool is_ready = false;\n auto& dependencies = graph_task->dependencies_;\n auto it = dependencies.find(next.function.get());\n\n if (it == dependencies.end()) {\n auto name = next.function->name();\n throw std::runtime_error(std::string(\"dependency not found for \") + name);\n } else if (--it->second == 0) {\n dependencies.erase(it);\n is_ready = true;\n }\n\n auto& not_ready = graph_task->not_ready_;\n auto not_ready_it = not_ready.find(next.function.get());\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```\n\nFor this, we check the `graph_task->dependencies_` map. We decrement the counter, and if it reaches 0, we mark the function pointed by the edge ready to be executed. Following, we prepare the input buffers of the tasks indicated by the next edges.\n\n```c++\n if (not_ready_it == not_ready.end()) {\n if (!exec_info_.empty()) {\n // Skip functions that aren't supposed to be executed\n }\n\n // Creates an InputBuffer and moves the output to the corresponding input position\n InputBuffer input_buffer(next.function->num_inputs());\n input_buffer.add(next.input_nr,\n std::move(output),\n opt_parent_stream,\n opt_next_stream);\n\n if (is_ready) {\n auto queue = ready_queue(cpu_ready_queue, input_buffer.device());\n queue->push(\n NodeTask(graph_task, next.function, std::move(input_buffer)));\n } else {\n not_ready.emplace(next.function.get(), std::move(input_buffer));\n }\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```\n\nHere, we look for the task in the `graph_task->not_ready_` map. If it is not present, we create a new `InputBuffer` object and set the current output in the `input_nr` position of the buffer associated with the edge. If the task is ready to be executed, we enqueue it in the appropriate device `ready_queue` and complete the execution. However, if the task is not ready and we have seen it before, it is present in the `not_ready_map_`.\n\n```c++\n } else {\n // The function already has a buffer\n auto &input_buffer = not_ready_it->second;\n // Accumulates into buffer\n input_buffer.add(next.input_nr,\n std::move(output),\n opt_parent_stream,\n opt_next_stream);\n if (is_ready) {\n auto queue = ready_queue(cpu_ready_queue, input_buffer.device());\n queue->push(NodeTask(graph_task, next.function, std::move(input_buffer)));\n not_ready.erase(not_ready_it);\n }\n }\n }\n}\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```\n\nIn this case, we accumulate the output in the existing `input_buffer` instead of creating a new one. Once all the tasks are processed, the worker thread exits the loop and complete.\nAll this process is summarized in the animation in Figure 5. We see how a thread peeks at the tasks in the ready queue and decrements the next nodes' dependencies, unlocking them for execution.\n\n\n
\n
\n\n\nFigure 5: Animation of the execution of the computational graph\n
\n\n## Flow with Reentrant Backward\n\nAs we saw above, the reentrant backward problem is when the currently executed function does a nested call to `backward`. When this happens, the thread running this function goes all the way down to `execute_with_graph_task` as in the non-reentrant case, but here is when things are different.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nc10::intrusive_ptr Engine::execute_with_graph_task(\n const std::shared_ptr& graph_task,\n std::shared_ptr graph_root,\n InputBuffer&& input_buffer) {\n\n initialize_device_threads_pool();\n // Lock mutex for GraphTask.\n std::unique_lock lock(graph_task->mutex_);\n\n auto queue = ready_queue(graph_task->cpu_ready_queue_, input_buffer.device());\n\n if (worker_device == NO_DEVICE) {\n //Regular case\n } else {\n // If worker_device is any devices (i.e. CPU, CUDA): this is a re-entrant\n // backward call from that device.\n graph_task->owner_ = worker_device;\n\n // Now that all the non-thread safe fields of the graph_task have been populated,\n // we can enqueue it.\n queue->push(NodeTask(graph_task, std::move(graph_root), std::move(input_buffer)));", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "if (current_depth >= max_recursion_depth_) {\n // If reached the max depth, switch to a different thread\n add_thread_pool_task(graph_task);\n } else {\n ++total_depth;\n ++current_depth;\n lock.unlock();\n thread_main(graph_task);\n --current_depth;\n --total_depth;\n }\n }\n return graph_task->future_result_;\n}\n\n```", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```\n\nHere, `execute_with_graph_task` detects this as a reentrant call and then looks for the current number of nested calls. If it exceeds the limit, we create a new thread to take care of the execution of this graph, and if not, we execute this reentrant call regularly.\nThe limit of nested calls was originally set to avoid stack overflow due to reentrant calls creating very large call stacks. However, the number was further reduced when sanitizer tests were added because of the maximum amount of locks a thread can hold at a given moment. This can be seen in [`torch/csrc/autograd/engine.h`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L36-L42).\n\n\nWhen this maximum depth is exceeded, a new thread is created with the [`add_thread_pool_task`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L1239-L1255) function.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nvoid Engine::add_thread_pool_task(const std::weak_ptr& graph_task) {\n std::unique_lock lck(thread_pool_shared_->mutex_);\n // if we have pending graph_task objects to be processed, create a worker.\n bool create_thread = (thread_pool_shared_->num_workers_ <= thread_pool_shared_->graphtasks_queue_.size());\n thread_pool_shared_->graphtasks_queue_.push(graph_task);\n\n\n lck.unlock();\n if (create_thread) {\n std::thread t(&Engine::reentrant_thread_init, this);\n t.detach();\n }\n\n thread_pool_shared_->work_.notify_one();\n}\n\n\n\n```\n\nBefore going in-depth, let's look at the `thread_pool_shared_` object in the [`Engine`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L421) which manages all the information related to the threads associated to the reentrant backward calls.", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\n struct ThreadPoolShared {\n unsigned int num_workers_;\n std::condition_variable work_;\n std::mutex mutex_;\n std::queue> graphtasks_queue_;\n\n // NOLINTNEXTLINE(cppcoreguidelines-pro-type-member-init)\n ThreadPoolShared() : num_workers_(0) {}\n };\n\n\n\n ```\n\n[`ThreadPoolShared`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.h#L398-L414) is a simple container holding a queue of `GraphTask` objects with synchronization mechanisms and the number of current workers.\n\nNow it is easy to understand how `add_thread_pool_task` creates a thread when there are `graph_task` objects enqueued and insufficient workers to process them.\n\n`add_thread_pool_task` initializes a thread by executing [`reentrant_thread_init`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/engine.cpp#L471-L493)", "metadata": {"source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"}}
+{"page_content": "```c++\nvoid Engine::reentrant_thread_init() {\n at::init_num_threads();\n auto tp_shared = thread_pool_shared_;\n while(true) {\n std::unique_lock lk(tp_shared->mutex_);\n ++thread_pool_shared_->num_workers_;\n tp_shared->work_.wait(lk, [&tp_shared]{ return !tp_shared->graphtasks_queue_.empty();});\n --thread_pool_shared_->num_workers_;\n auto task = tp_shared->graphtasks_queue_.front();\n tp_shared->graphtasks_queue_.pop();\n lk.unlock();\n std::shared_ptr