# Dask Operator
Many libraries in RAPIDS can leverage Dask to scale out computation onto multiple GPUs and multiple nodes.
[Dask has an operator for Kubernetes](https://kubernetes.dask.org/en/latest/operator.html) which allows you to launch Dask clusters as native Kubernetes resources.
With the operator and associated Custom Resource Definitions (CRDs)
you can create `DaskCluster`, `DaskWorkerGroup` and `DaskJob` resources that describe your Dask components and the operator will
create the appropriate Kubernetes resources like `Pods` and `Services` to launch the cluster.
```{mermaid}
graph TD
DaskJob(DaskJob)
DaskCluster(DaskCluster)
SchedulerService(Scheduler Service)
SchedulerPod(Scheduler Pod)
DaskWorkerGroup(DaskWorkerGroup)
WorkerPodA(Worker Pod A)
WorkerPodB(Worker Pod B)
WorkerPodC(Worker Pod C)
JobPod(Job Runner Pod)
DaskJob --> DaskCluster
DaskJob --> JobPod
DaskCluster --> SchedulerService
SchedulerService --> SchedulerPod
DaskCluster --> DaskWorkerGroup
DaskWorkerGroup --> WorkerPodA
DaskWorkerGroup --> WorkerPodB
DaskWorkerGroup --> WorkerPodC
classDef dask stroke:#FDA061,stroke-width:4px
classDef dashed stroke-dasharray: 5 5
class DaskJob dask
class DaskCluster dask
class DaskWorkerGroup dask
class SchedulerService dashed
class SchedulerPod dashed
class WorkerPodA dashed
class WorkerPodB dashed
class WorkerPodC dashed
class JobPod dashed
```
## Installation
Your Kubernetes cluster must have GPU nodes and have [up to date NVIDIA drivers installed](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/getting-started.html).
To install the Dask operator follow the [instructions in the Dask documentation](https://kubernetes.dask.org/en/latest/operator_installation.html).
## Configuring a RAPIDS `DaskCluster`
To configure the `DaskCluster` resource to run RAPIDS you need to set a few things:
- The container image must contain RAPIDS, the [official RAPIDS container images](/tools/rapids-docker) are a good choice for this.
- The Dask workers must be configured with one or more NVIDIA GPU resources.
- The worker command must be set to `dask-cuda-worker`.
## Example using `kubectl`
Here is an example resource manifest for launching a RAPIDS Dask cluster.
```yaml
# rapids-dask-cluster.yaml
apiVersion: kubernetes.dask.org/v1
kind: DaskCluster
metadata:
name: rapids-dask-cluster
labels:
dask.org/cluster-name: rapids-dask-cluster
spec:
worker:
replicas: 2
spec:
containers:
- name: worker
image: "{{ rapids_container }}"
imagePullPolicy: "IfNotPresent"
args:
- dask-cuda-worker
- --name
- $(DASK_WORKER_NAME)
resources:
limits:
nvidia.com/gpu: "1"
scheduler:
spec:
containers:
- name: scheduler
image: "{{ rapids_container }}"
imagePullPolicy: "IfNotPresent"
args:
- dask-scheduler
ports:
- name: tcp-comm
containerPort: 8786
protocol: TCP
- name: http-dashboard
containerPort: 8787
protocol: TCP
readinessProbe:
httpGet:
port: http-dashboard
path: /health
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
httpGet:
port: http-dashboard
path: /health
initialDelaySeconds: 15
periodSeconds: 20
service:
type: ClusterIP
selector:
dask.org/cluster-name: rapids-dask-cluster
dask.org/component: scheduler
ports:
- name: tcp-comm
protocol: TCP
port: 8786
targetPort: "tcp-comm"
- name: http-dashboard
protocol: TCP
port: 8787
targetPort: "http-dashboard"
```
You can create this cluster with `kubectl`.
```console
$ kubectl apply -f rapids-dask-cluster.yaml
```
### Manifest breakdown
Let's break this manifest down section by section.
#### Metadata
At the top we see the `DaskCluster` resource type and general metadata.
```yaml
apiVersion: kubernetes.dask.org/v1
kind: DaskCluster
metadata:
name: rapids-dask-cluster
labels:
dask.org/cluster-name: rapids-dask-cluster
spec:
worker:
# ...
scheduler:
# ...
```
Then inside the `spec` we have `worker` and `scheduler` sections.
#### Worker
The worker contains a `replicas` option to set how many workers you need and a `spec` that describes what each worker pod should look like.
The spec is a nested [`Pod` spec](https://kubernetes.io/docs/concepts/workloads/pods/) that the operator will use when creating new `Pod` resources.
```yaml
# ...
spec:
worker:
replicas: 2
spec:
containers:
- name: worker
image: "{{ rapids_container }}"
imagePullPolicy: "IfNotPresent"
args:
- dask-cuda-worker
- --name
- $(DASK_WORKER_NAME)
resources:
limits:
nvidia.com/gpu: "1"
scheduler:
# ...
```
Inside our pod spec we are configuring one container that uses the RAPIDS container image.
It also sets the `args` to start the `dask-cuda-worker` and configures one NVIDIA GPU.
#### Scheduler
Next we have a `scheduler` section that also contains a `spec` for the scheduler pod and a `service` which will be used by the operator to create a `Service` resource to expose the scheduler.
```yaml
# ...
spec:
worker:
# ...
scheduler:
spec:
containers:
- name: scheduler
image: "{{ rapids_container }}"
imagePullPolicy: "IfNotPresent"
args:
- dask-scheduler
ports:
- name: tcp-comm
containerPort: 8786
protocol: TCP
- name: http-dashboard
containerPort: 8787
protocol: TCP
readinessProbe:
httpGet:
port: http-dashboard
path: /health
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
httpGet:
port: http-dashboard
path: /health
initialDelaySeconds: 15
periodSeconds: 20
service:
# ...
```
For the scheduler pod we also set the RAPIDS container image, mainly to ensure our Dask versions match between the scheduler and workers, and we set the `args` so that the `dask-scheduler` command is run.
Then we configure both the Dask communication port on `8786` and the Dask dashboard on `8787` and add some probes so that Kubernetes can monitor
the health of the scheduler.
```{note}
The ports must have the `tcp-` and `http-` prefixes if your Kubernetes cluster uses [Istio](https://istio.io/) to ensure the [Envoy proxy](https://www.envoyproxy.io/) doesn't mangle the traffic.
```
Then we configure the `Service`.
```yaml
# ...
spec:
worker:
# ...
scheduler:
spec:
# ...
service:
type: ClusterIP
selector:
dask.org/cluster-name: rapids-dask-cluster
dask.org/component: scheduler
ports:
- name: tcp-comm
protocol: TCP
port: 8786
targetPort: "tcp-comm"
- name: http-dashboard
protocol: TCP
port: 8787
targetPort: "http-dashboard"
```
This example shows using a `ClusterIP` service which will not expose the Dask cluster outside of Kubernetes. If you prefer you could set this to
[`LoadBalancer`](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) or [`NodePort`](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) to make this externally accessible.
It has a `selector` that matches the scheduler pod and the same ports configured.
### Accessing your Dask cluster
Once you have created your `DaskCluster` resource we can use `kubectl` to check the status of all the other resources it created for us.
```console
$ kubectl get all -l dask.org/cluster-name=rapids-dask-cluster
NAME READY STATUS RESTARTS AGE
pod/rapids-dask-cluster-default-worker-group-worker-0c202b85fd 1/1 Running 0 4m13s
pod/rapids-dask-cluster-default-worker-group-worker-ff5d376714 1/1 Running 0 4m13s
pod/rapids-dask-cluster-scheduler 1/1 Running 0 4m14s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/rapids-dask-cluster-service ClusterIP 10.96.223.217 <none> 8786/TCP,8787/TCP 4m13s
```
Here you can see our scheduler pod and two worker pods along with the scheduler service.
If you have a Python session running within the Kubernetes cluster (like the [example one on the Kubernetes page](/platforms/kubernetes)) you should be able
to connect a Dask distributed client directly.
```python
from dask.distributed import Client
client = Client("rapids-dask-cluster-scheduler:8786")
```
Alternatively if you are outside of the Kubernetes cluster you can change the `Service` to use [`LoadBalancer`](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) or [`NodePort`](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) or use `kubectl` to port forward the connection locally.
```console
$ kubectl port-forward svc/rapids-dask-cluster-service 8786:8786
Forwarding from 127.0.0.1:8786 -> 8786
```
```python
from dask.distributed import Client
client = Client("localhost:8786")
```
## Example using `KubeCluster`
In addition to creating clusters via `kubectl` you can also do so from Python with {class}`dask_kubernetes.operator.KubeCluster`. This class implements the Dask Cluster Manager interface and under the hood creates and manages the `DaskCluster` resource for you.
```python
from dask_kubernetes.operator import KubeCluster
cluster = KubeCluster(
name="rapids-dask",
image="{{ rapids_container }}",
n_workers=3,
resources={"limits": {"nvidia.com/gpu": "1"}},
worker_command="dask-cuda-worker",
)
```
If we check with `kubectl` we can see the above Python generated the same `DaskCluster` resource as the `kubectl` example above.
```console
$ kubectl get daskclusters
NAME AGE
rapids-dask-cluster 3m28s
$ kubectl get all -l dask.org/cluster-name=rapids-dask-cluster
NAME READY STATUS RESTARTS AGE
pod/rapids-dask-cluster-default-worker-group-worker-07d674589a 1/1 Running 0 3m30s
pod/rapids-dask-cluster-default-worker-group-worker-a55ed88265 1/1 Running 0 3m30s
pod/rapids-dask-cluster-default-worker-group-worker-df785ab050 1/1 Running 0 3m30s
pod/rapids-dask-cluster-scheduler 1/1 Running 0 3m30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/rapids-dask-cluster-service ClusterIP 10.96.200.202 <none> 8786/TCP,8787/TCP 3m30s
```
With this cluster object in Python we can also connect a client to it directly without needing to know the address as Dask will discover that for us. It also automatically sets up port forwarding if you are outside of the Kubernetes cluster.
```python
from dask.distributed import Client
client = Client(cluster)
```
This object can also be used to scale the workers up and down.
```python
cluster.scale(5)
```
And to manually close the cluster.
```python
cluster.close()
```
```{note}
By default the `KubeCluster` class registers an exit hook so when the Python process exits the cluster is deleted automatically. You can disable this by setting `KubeCluster(..., shutdown_on_close=False)` when launching the cluster.
This is useful if you have a multi-stage pipeline made up of multiple Python processes and you want your Dask cluster to persist between them.
You can also connect a `KubeCluster` object to your existing cluster with `cluster = KubeCluster.from_name(name="rapids-dask")` if you wish to use the cluster or manually call `cluster.close()` in the future.
```
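For example, here is a minimal sketch (reusing the cluster name and options from the example above) of keeping a cluster alive across two Python processes and cleaning it up later:
```python
# Process 1: create the cluster but keep it running after this process exits
from dask_kubernetes.operator import KubeCluster

cluster = KubeCluster(
    name="rapids-dask",
    image="{{ rapids_container }}",
    n_workers=3,
    resources={"limits": {"nvidia.com/gpu": "1"}},
    worker_command="dask-cuda-worker",
    shutdown_on_close=False,
)

# Process 2 (later): reconnect to the existing cluster and delete it when done
cluster = KubeCluster.from_name(name="rapids-dask")
cluster.close()
```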
```{relatedexamples}
```
# Dask Helm Chart
Dask has a [Helm Chart](https://github.com/dask/helm-chart) that creates the following resources:
- 1 x Jupyter server (preconfigured to access the Dask cluster)
- 1 x Dask scheduler
- 3 x Dask workers that connect to the scheduler (scalable)
This helm chart can be configured to run RAPIDS by providing GPUs to the Jupyter server and Dask workers and by using container images with the RAPIDS libraries available.
## Configuring RAPIDS
Building on top of the Dask Helm Chart, the `rapids-config.yaml` file below contains the additional configuration required to set up a RAPIDS environment.
```yaml
# rapids-config.yaml
scheduler:
image:
repository: "{{ rapids_container.split(":")[0] }}"
tag: "{{ rapids_container.split(":")[1] }}"
worker:
image:
repository: "{{ rapids_container.split(":")[0] }}"
tag: "{{ rapids_container.split(":")[1] }}"
dask_worker: "dask_cuda_worker"
replicas: 3
resources:
limits:
nvidia.com/gpu: 1
jupyter:
image:
repository: "{{ rapids_container.split(":")[0] }}"
tag: "{{ rapids_container.split(":")[1] }}"
servicePort: 8888
# Default password hash for "rapids"
password: "argon2:$argon2id$v=19$m=10240,t=10,p=8$TBbhubLuX7efZGRKQqIWtw$RG+jCBB2KYF2VQzxkhMNvHNyJU9MzNGTm2Eu2/f7Qpc"
resources:
limits:
nvidia.com/gpu: 1
```
`[jupyter|scheduler|worker].image.*` is updated with the RAPIDS "runtime" image from the stable release,
which includes the environment necessary to run the accelerated RAPIDS libraries and to scale up and down via Dask.
Note that the scheduler, worker and Jupyter pods are all required to use the same image;
this ensures that the Dask scheduler and worker versions match.
`[jupyter|worker].resources` explicitly requests a GPU for each worker pod and the Jupyter pod, required by many accelerated libraries in RAPIDS.
`worker.dask_worker` is the launch command for the Dask worker inside the worker pod.
To leverage the GPUs assigned to each Pod the [`dask_cuda_worker`](https://docs.rapids.ai/api/dask-cuda/stable/index.html) command is launched in place of the regular `dask_worker`.
If you want a different Jupyter notebook password than the default, compute the hash for `<your-password>` and update `jupyter.password`.
You can compute the password hash by following the [Jupyter notebook guide](https://jupyter-notebook.readthedocs.io/en/stable/public_server.html?highlight=passwd#preparing-a-hashed-password).
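For example, a minimal sketch of generating the hash locally (this assumes the `jupyter_server` package is available in your Python environment):
```python
# Generate a hashed password to place in jupyter.password
from jupyter_server.auth import passwd

print(passwd("<your-password>"))
```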
### Installing the Helm Chart
```console
$ helm repo add dask https://helm.dask.org
$ helm repo update
$ helm install rapids-release dask/dask -f rapids-config.yaml
```
This will deploy a cluster with the same topology as the Dask Helm Chart;
see the [Dask Helm Chart documentation](https://artifacthub.io/packages/helm/dask/dask) for details.
```{note}
By default, the Dask Helm Chart will not create an `Ingress` resource.
A custom `Ingress` may be configured to consume external traffic and redirect to corresponding services.
```
For simplicity, this guide will set up access to the Jupyter server via port forwarding.
## Running a RAPIDS Notebook
First, set up port forwarding from the cluster to a local port:
```console
# For the Jupyter server
$ kubectl port-forward --address 127.0.0.1 service/rapids-release-dask-jupyter 8888:8888
# For the Dask dashboard
$ kubectl port-forward --address 127.0.0.1 service/rapids-release-dask-scheduler 8787:8787
```
Open a browser and visit `localhost:8888` to access Jupyter,
and `localhost:8787` for the dask dashboard.
Enter the password (default is `rapids`) and access the notebook environment.
### Notebooks and Cluster Scaling
Now we can verify that everything is working correctly by running some of the example notebooks.
Open the `10 Minutes to cuDF and Dask-cuDF` notebook under `cudf/10-min.ipynb`.
Add a new cell at the top to connect to the Dask cluster. Conveniently, the Helm Chart preconfigures the scheduler address in the client's environment,
so you do not need to pass any configuration to the `Client` object.
```python
from dask.distributed import Client
client = Client()
client
```
By default, we can see 3 workers are created and each has 1 GPU assigned.

Walk through the examples to validate that the Dask cluster is set up correctly and that GPUs are accessible to the workers.
Worker metrics can be examined in the Dask dashboard.

In case you want to scale up the cluster with more GPU workers, you may do so via `kubectl` or via `helm upgrade`.
```bash
$ kubectl scale deployment rapids-release-dask-worker --replicas=8
# or
$ helm upgrade --set worker.replicas=8 rapids-release dask/dask
```

```{relatedexamples}
```
# Dask Kubernetes
This article introduces the classic way to set up RAPIDS with `dask-kubernetes`.
## Prerequisites
- A Kubernetes cluster that can allocate GPU pods.
- [miniconda](https://docs.conda.io/en/latest/miniconda.html)
## Client environment setup
The client environment is used to set up the Dask cluster and execute user code.
In this demo we need, at minimum, `cudf`, `dask-cudf` and `dask-kubernetes`.
It is recommended to follow the RAPIDS [release selector](https://docs.rapids.ai/install#selector) to set up your environment.
For simplicity, this guide assumes you manage environments with conda and install
the minimal requirements mentioned above with:
```bash
conda create -n rapids {{ rapids_conda_channels }} \
{{ rapids_conda_packages }} dask-kubernetes
```
## Cluster setup
You can create a dask-cuda cluster via the `KubeCluster` interface:
```python
from dask_kubernetes import KubeCluster, make_pod_spec
gpu_worker_spec = make_pod_spec(
image="{{ rapids_container }}",
cpu_limit=2,
cpu_request=2,
memory_limit="3G",
memory_request="3G",
gpu_limit=1,
)
cluster = KubeCluster(gpu_worker_spec)
```
Alternatively, you can specify the worker pod with a standard Kubernetes pod specification.
```yaml
# gpu-worker-spec.yaml
kind: Pod
metadata:
labels:
cluster_type: dask
dask_type: GPU_worker
spec:
restartPolicy: Never
containers:
- image: "{{ rapids_container }}"
imagePullPolicy: IfNotPresent
args: [dask-cuda-worker, $(DASK_SCHEDULER_ADDRESS), --rmm-managed-memory]
name: dask-cuda
resources:
limits:
cpu: "2"
memory: 3G
nvidia.com/gpu: 1 # requesting 1 GPU
requests:
cpu: "2"
memory: 3G
nvidia.com/gpu: 1 # requesting 1 GPU
```
Load the spec via:
```python
cluster = KubeCluster("gpu-worker-spec.yaml")
```
```{note}
It is recommended that the client's and scheduler's Dask versions match, so pick the same
RAPIDS version when installing the client environment and when choosing the image for the
worker pods.
```
At this point, a cluster containing a single dask-scheduler pod is set up.
To create the worker pods, use `cluster.scale`.
```python
cluster.scale(3)
```
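To make sure subsequent computations actually run on the cluster, connect a Dask client to it (the standard Dask pattern):
```python
from dask.distributed import Client

# Route Dask work to the cluster created above
client = Client(cluster)
```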
### Verification
Create a small `dask_cudf` dataframe and compute the result on the cluster:
```python
import cudf, dask_cudf
ddf = dask_cudf.from_cudf(cudf.DataFrame({"a": list(range(20))}), npartitions=2)
ddf.sum().compute()
# should print a: 190
```
```{relatedexamples}
```
# Databricks
You can install RAPIDS on Databricks in a few different ways:
1. Accelerate machine learning workflows in a single-node GPU notebook environment
2. Spark users can install [RAPIDS Accelerator for Apache Spark 3.x on Databricks](https://docs.nvidia.com/spark-rapids/user-guide/latest/getting-started/databricks.html)
3. Install Dask alongside Spark and then use libraries like `dask-cudf` for multi-node workloads
## Single-node GPU Notebook environment
(launch-databricks-cluster)=
### Launch cluster
To get started with a single-node Databricks cluster, navigate to the **All Purpose Compute** tab of the **Compute** section in Databricks and select **Create Compute**. Name your cluster and choose "Single node".

In order to launch a GPU node uncheck **Use Photon Acceleration**.

Then expand the **Advanced Options** section and open the **Docker** tab. Select **Use your own Docker container** and enter the image `databricksruntime/gpu-tensorflow:cuda11.8` or `databricksruntime/gpu-pytorch:cuda11.8`.

Once you have completed these steps, the "GPU accelerated" nodes should be available in the **Node type** dropdown.

Select **Create Compute**
### Install RAPIDS
Once your cluster has started, you can create a new notebook or open an existing one from the `/Workspace` directory then attach it to your running cluster.
````{warning}
At the time of writing, the `databricksruntime/gpu-pytorch:cuda11.8` image does not contain the full `cuda-toolkit`, so if you selected that one you will need to install it before installing RAPIDS.
```text
!cd /etc/apt/sources.list.d && \
mv cuda-ubuntu2204-x86_64.list.disabled cuda-ubuntu2204-x86_64.list && \
apt-get update && apt-get --no-install-recommends -y install cuda-toolkit-11-8 && \
mv cuda-ubuntu2204-x86_64.list cuda-ubuntu2204-x86_64.list.disabled
```
````
At the top of your notebook run any of the following pip install commands to install your preferred RAPIDS libraries.
```text
!pip install cudf-cu11 dask-cudf-cu11 --extra-index-url=https://pypi.nvidia.com
!pip install cuml-cu11 --extra-index-url=https://pypi.nvidia.com
!pip install cugraph-cu11 --extra-index-url=https://pypi.nvidia.com
```
### Test RAPIDS
```python
import cudf
gdf = cudf.DataFrame({"a":[1,2,3],"b":[4,5,6]})
gdf
a b
0 1 4
1 2 5
2 3 6
```
## Multi-node Dask cluster
Dask now has a [dask-databricks](https://github.com/jacobtomlinson/dask-databricks) CLI tool (via [`conda`](https://github.com/conda-forge/dask-databricks-feedstock) and [`pip`](https://pypi.org/project/dask-databricks/)) to simplify the Dask cluster startup process within Databricks.
### Create init-script
To get started, you must first configure an [initialization script](https://docs.databricks.com/en/init-scripts/index.html) to install `dask`, `dask-databricks`, the RAPIDS libraries, and all other dependencies for your project.
Databricks recommends using [cluster-scoped](https://docs.databricks.com/en/init-scripts/cluster-scoped.html) init scripts stored in the workspace files.
Navigate to the top-left **Workspace** tab and click on your **Home** directory then select **Add** > **File** from the menu. Create an `init.sh` script with contents:
```bash
#!/bin/bash
set -e
# The Databricks Python directory isn't on the path in
# databricksruntime/gpu-tensorflow:cuda11.8 for some reason
export PATH="/databricks/python/bin:$PATH"
# Install RAPIDS (cudf & dask-cudf) and dask-databricks
/databricks/python/bin/pip install --extra-index-url=https://pypi.nvidia.com \
cudf-cu11 \
dask[complete] \
dask-cudf-cu11 \
dask-cuda=={{ rapids_version }} \
dask-databricks
# Start the Dask cluster with CUDA workers
dask databricks run --cuda
```
**Note**: By default, the `dask databricks run` command will launch a Dask scheduler on the driver node and standard workers on the remaining nodes.
To launch a Dask cluster with GPU workers, you must pass the `--cuda` flag.
### Launch Dask cluster
Once your script is ready, follow the same [instructions](launch-databricks-cluster) to launch a **Multi-node** Databricks cluster.
After docker setup in **Advanced Options**, switch to the **Init Scripts** tab and add the file path to the init-script in your Workspace directory starting with `/Users/<user-name>/<script-name>.sh`.
You can also configure cluster log delivery in the **Logging** tab, which will write the init script logs to DBFS in a subdirectory called `dbfs:/cluster-logs/<cluster-id>/init_scripts/`. Refer to [docs](https://docs.databricks.com/en/init-scripts/logs.html) for more information.

Now you should be able to select a "GPU-Accelerated" instance for both **Worker** and **Driver** nodes.

### Connect to Client
To test RAPIDS, connect to the Dask client and submit tasks.
```python
import dask_databricks
client = dask_databricks.get_client()
client
```
The **[Dask dashboard](https://docs.dask.org/en/latest/dashboard.html)** provides a web-based UI with visualizations and real-time information about the Dask cluster's status, i.e. task progress, resource utilization, etc.
The Dask dashboard server will start up automatically when the scheduler is created and is hosted on port `8087` by default.
To access it, follow the provided URL link to the dashboard status endpoint from within Databricks.

```python
import cudf
import dask
df = dask.datasets.timeseries().map_partitions(cudf.from_pandas)
df.x.mean().compute()
```

### Clean up
```python
client.close()
```
---
html_theme.sidebar_secondary.remove: true
---
# Platforms
`````{gridtoctree} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: kubernetes
:link-type: doc
Kubernetes
^^^
Launch RAPIDS containers and cluster on Kubernetes with various tools.
{bdg}`single-node`
{bdg}`multi-node`
````
````{grid-item-card}
:link: kubeflow
:link-type: doc
Kubeflow
^^^
Integrate RAPIDS with Kubeflow notebooks and pipelines.
{bdg}`single-node`
{bdg}`multi-node`
````
````{grid-item-card}
:link: kserve
:link-type: doc
KServe
^^^
Deploy RAPIDS models with KServe, a standard model inference platform
for Kubernetes.
{bdg}`multi-node`
````
````{grid-item-card}
:link: coiled
:link-type: doc
Coiled
^^^
Run RAPIDS on Coiled.
{bdg}`multi-node`
````
````{grid-item-card}
:link: databricks
:link-type: doc
Databricks
^^^
Run RAPIDS on Databricks.
{bdg}`single-node`
````
````{grid-item-card}
:link: colab
:link-type: doc
Google Colab
^^^
Run RAPIDS on Google Colab.
{bdg}`single-node`
````
`````
# Coiled
You can deploy RAPIDS on a multi-node Dask cluster with GPUs using [Coiled](https://www.coiled.io/).
By using the [`coiled`](https://anaconda.org/conda-forge/coiled) Python library, you can set up and manage Dask clusters with GPUs and RAPIDS on cloud computing environments such as GCP or AWS.
Coiled clusters can also run Jupyter which is useful if you don't have a local GPU.
## Quickstart
### Login
To get started you need to install Coiled and login.
```console
$ conda install -c conda-forge coiled
$ coiled setup
```
For more information see the [Coiled Getting Started documentation](https://docs.coiled.io/user_guide/getting_started.html).
### Software Environment
Next you'll need to register a RAPIDS software environment with Coiled.
You can either build this from the official RAPIDS Container images.
```python
import coiled
coiled.create_software_environment(
name="rapids-{{ rapids_version.replace(".", "-") }}",
container="{{ rapids_container }}",
)
```
Or you can create it with a list of conda packages in case you want to customize the environment further.
```python
import coiled
coiled.create_software_environment(
name="rapids-{{ rapids_version.replace(".", "-") }}",
gpu_enabled=True, # sets CUDA version for Conda to ensure GPU version of packages get installed
conda={
"channels": {{ rapids_conda_channels_list }},
"dependencies": {{ rapids_conda_packages_list + ["jupyterlab"] }},
},
)
```
```{note}
If you want to use the remote Jupyter feature you'll need to ensure your environment has `jupyterlab` installed which is included in the container image but not the conda package so it needs to be specified.
```
### Cluster creation
Now you can launch a cluster with this environment.
```python
cluster = coiled.Cluster(
software="rapids-{{ rapids_version.replace(".", "-") }}", # specify the software env you just created
jupyter=True, # run Jupyter server on scheduler
scheduler_gpu=True, # add GPU to scheduler
n_workers=4,
worker_gpu=1, # single T4 per worker
worker_class="dask_cuda.CUDAWorker", # recommended
)
```
Once the cluster has started you can also get the Jupyter URL and navigate to Jupyter Lab running on the Dask Scheduler node.
```python
>>> print(cluster.jupyter_link)
https://cluster-abc123.dask.host/jupyter/lab?token=dddeeefff444555666
```
We can run `!nvidia-smi` in our notebook to see information on the GPU available to Jupyter.
We can also connect a Dask client to see that information for the workers too.
```python
from dask.distributed import Client
client = Client(cluster)
client
```

From this Jupyter session we can see that our notebook server has a GPU and we can connect to the Dask cluster with no configuration and see all the Dask Workers have GPUs too.
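When you are finished, you can shut the cluster down so that the cloud resources Coiled provisioned are released (a minimal sketch using the cluster object created above):
```python
# Shut down the Coiled cluster and release its cloud resources
cluster.close()
```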
```{relatedexamples}
```
# Kubeflow
You can use RAPIDS with Kubeflow in a single pod with [Kubeflow Notebooks](https://www.kubeflow.org/docs/components/notebooks/) or you can scale out to many pods on many nodes of the Kubernetes cluster with the [dask-operator](/tools/kubernetes/dask-operator).
```{note}
These instructions were tested against [Kubeflow v1.5.1](https://github.com/kubeflow/manifests/releases/tag/v1.5.1) running on [Kubernetes v1.21](https://kubernetes.io/blog/2021/04/08/kubernetes-1-21-release-announcement/). Visit [Installing Kubeflow](https://www.kubeflow.org/docs/started/installing-kubeflow/) for instructions on installing Kubeflow on your Kubernetes cluster.
```
## Kubeflow Notebooks
The [RAPIDS docker images](/tools/rapids-docker) can be used directly in Kubeflow Notebooks with no additional configuration. To find the latest image, head to [the RAPIDS install page](https://docs.rapids.ai/install) and choose a version of RAPIDS to use; typically you want the container image for the latest release. Make sure **Docker** is selected as the install method so that the release selector shows a container image.
Be sure to match the CUDA version in the container image with that installed on your Kubernetes nodes. The default CUDA version installed on GKE Stable is 11.4 for example, so we would want to choose that. From 11.5 onwards it doesn’t matter as they will be backward compatible. Copy the container image name from the install command (i.e. `{{ rapids_container }}`).
````{note}
You can [check your CUDA version](https://jacobtomlinson.dev/posts/2022/how-to-check-your-nvidia-driver-and-cuda-version-in-kubernetes/) by creating a pod and running `nvidia-smi`. For example:
```console
$ kubectl run nvidia-smi --restart=Never --rm -i --tty --image nvidia/cuda:11.0.3-base-ubuntu20.04 -- nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.46 Driver Version: 495.46 CUDA Version: 11.5 |
|-------------------------------+----------------------+----------------------+
...
```
````
Now in Kubeflow, access the Notebooks tab on the left and click “New Notebook”.
```{figure} /images/kubeflow-create-notebook.png
---
alt: Screenshot of the Kubeflow Notebooks page with the “New Notebook” button highlighted
---
```
On this page, we must set a few configuration options. First, let’s give it a name like `rapids`. We need to check the “use custom image” box and paste in the container image we got from the RAPIDS release selector. Then, we want to set the CPU and RAM to something a little higher (i.e. 2 CPUs and 8GB memory) and set the number of NVIDIA GPUs to 1.
```{figure} /images/kubeflow-new-notebook.png
---
alt: Screenshot of the Kubeflow Notebooks page
---
New Kubeflow notebook form named rapids with the custom RAPIDS container image, 2 CPU cores, 8GB of RAM, and 1 NVIDIA GPU selected
```
Then, you can scroll to the bottom of the page and hit launch. You should see it starting up in your list. The RAPIDS container images are packed full of amazing tools so this step can take a little while.
```{figure} /images/kubeflow-notebook-running.png
---
alt: Screenshot of the Kubeflow Notebooks page showing the rapids notebook starting up
---
Once the Notebook is ready, click Connect to launch Jupyter.
```
You can verify everything works okay by opening a terminal in Jupyter and running:
```console
$ nvidia-smi
```
```{figure} /images/kubeflow-jupyter-nvidia-smi.png
---
alt: Screenshot of a terminal open in Jupyter Lab with the output of the nvidia-smi command listing one A100 GPU
---
There is one A100 GPU listed which is available for use in your Notebook.
```
The RAPIDS container also comes with some example notebooks which you can find in `/rapids/notebooks`. You can make a symbolic link to these from your home directory so you can easily navigate using the file explorer on the left `ln -s /rapids/notebooks /home/jovyan/notebooks`.
Now you can navigate those example notebooks and explore all the libraries RAPIDS offers. For example, ETL developers that use [Pandas](https://pandas.pydata.org/) should check out the [cuDF](https://docs.rapids.ai/api/cudf/stable/) notebooks for examples of accelerated dataframes.
```{figure} /images/kubeflow-jupyter-example-notebook.png
---
alt: Screenshot of Jupyter Lab with the “10 minutes to cuDF and dask-cuDF” notebook open
---
```
## Scaling out to many GPUs
Many of the RAPIDS libraries also allow you to scale out your computations onto many GPUs spread over many nodes for additional acceleration. To do this we leverage [Dask](https://www.dask.org/), an open source Python library for distributed computing.
To use Dask, we need to create a scheduler and some workers that will perform our calculations. These workers will also need GPUs and the same Python environment as your notebook session. Dask has [an operator for Kubernetes](/tools/kubernetes/dask-operator) that you can use to manage Dask clusters on your Kubeflow cluster.
### Installing the Dask Kubernetes operator
To install the operator we need to create the custom resources and the operator itself; please [refer to the documentation](https://kubernetes.dask.org/en/latest/operator_installation.html) to find up-to-date installation instructions. From the terminal run the following command.
```console
$ helm install --repo https://helm.dask.org --create-namespace -n dask-operator --generate-name dask-kubernetes-operator
NAME: dask-kubernetes-operator-1666875935
NAMESPACE: dask-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Operator has been installed successfully.
```
Verify our resources were applied successfully by listing our Dask clusters. Don’t expect to see any resources yet but the command should succeed.
```console
$ kubectl get daskclusters
No resources found in default namespace.
```
You can also check the operator pod is running and ready to launch new Dask clusters.
```console
$ kubectl get pods -A -l app.kubernetes.io/name=dask-kubernetes-operator
NAMESPACE NAME READY STATUS RESTARTS AGE
dask-operator dask-kubernetes-operator-775b8bbbd5-zdrf7 1/1 Running 0 74s
```
Lastly, ensure that your notebook session can create and manage Dask custom resources. To do this you need to edit the `kubeflow-kubernetes-edit` cluster role that gets applied to notebook pods. Add a new rule to the rules section for this role to allow everything in the `kubernetes.dask.org` API group.
```console
$ kubectl edit clusterrole kubeflow-kubernetes-edit
…
rules:
…
- apiGroups:
- "kubernetes.dask.org"
verbs:
- "*"
resources:
- "*"
…
```
### Creating a Dask cluster
Now you can create `DaskCluster` resources in Kubernetes that will launch all the necessary pods and services for our cluster to work. This can be done in YAML via the Kubernetes API or using the Python API from a notebook session as shown in this section.
In a Jupyter session, create a new notebook and install the `dask-kubernetes` package which you will need to launch Dask clusters.
```ipython
!pip install dask-kubernetes
```
Next, create a Dask cluster using the `KubeCluster` class. Set the container image to match the one used for your notebook environment and set the number of GPUs to 1. Also tell the RAPIDS container not to start Jupyter by default and run our Dask command instead.
This can take a similar amount of time to starting up the notebook container as it will also have to pull the RAPIDS docker image.
```python
from dask_kubernetes.experimental import KubeCluster
cluster = KubeCluster(
name="rapids-dask",
image="{{ rapids_container }}",
worker_command="dask-cuda-worker",
n_workers=2,
resources={"limits": {"nvidia.com/gpu": "1"}},
)
```
```{figure} /images/kubeflow-jupyter-dask-cluster-widget.png
---
alt: Screenshot of the Dask cluster widget in Jupyter Lab showing two workers with A100 GPUs
---
This creates a Dask cluster with two workers, and each worker has an A100 GPU the same as your Jupyter session
```
You can scale this cluster up and down either with the scaling tab in the widget in Jupyter or by calling `cluster.scale(n)` to set the number of workers (and therefore the number of GPUs).
Now you can connect a Dask client to our cluster and from that point on any RAPIDS libraries that support dask such as `dask_cudf` will use our cluster to distribute our computation over all of our GPUs.
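For example, a minimal sketch of connecting the client and distributing a small `cudf.Series` (the values here are purely illustrative):
```python
from dask.distributed import Client

import cudf
import dask_cudf

# Connect to the Dask cluster created above
client = Client(cluster)

# Create a cuDF Series and distribute it across the GPU workers
s = cudf.Series(range(100))
ds = dask_cudf.from_cudf(s, npartitions=2)
print(ds.sum().compute())
```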
```{figure} /images/kubeflow-jupyter-using-dask.png
---
alt: Screenshot of some cudf code in Jupyter Lab that leverages Dask
---
Here is a short example of creating a `Series` object and distributing it with Dask
```
## Accessing the Dask dashboard from notebooks
When working interactively in a notebook and leveraging a Dask cluster it can be really valuable to see the Dask dashboard. The dashboard is available on the scheduler `Pod` in the Dask cluster so we need to set some extra configuration to make this available from our notebook `Pod`.
To do this, we can apply the following manifest.
```yaml
# configure-dask-dashboard.yaml
apiVersion: "kubeflow.org/v1alpha1"
kind: PodDefault
metadata:
name: configure-dask-dashboard
spec:
selector:
matchLabels:
configure-dask-dashboard: "true"
desc: "configure dask dashboard"
env:
- name: DASK_DISTRIBUTED__DASHBOARD__LINK
value: "{NB_PREFIX}/proxy/{host}:{port}/status"
volumeMounts:
- name: jupyter-server-proxy-config
mountPath: /root/.jupyter/jupyter_server_config.py
subPath: jupyter_server_config.py
volumes:
- name: jupyter-server-proxy-config
configMap:
name: jupyter-server-proxy-config
---
apiVersion: v1
kind: ConfigMap
metadata:
name: jupyter-server-proxy-config
data:
jupyter_server_config.py: |
c.ServerProxy.host_allowlist = lambda app, host: True
```
Create a file with the above contents, and then apply it into your user’s namespace with `kubectl`.
For the default `user@example.com` user it would look like this.
```console
$ kubectl apply -n kubeflow-user-example-com -f configure-dask-dashboard.yaml
```
This configuration file does two things. First it configures the [jupyter-server-proxy](https://github.com/jupyterhub/jupyter-server-proxy) running in your Notebook container to allow proxying to all hosts. We can do this safely because we are relying on Kubernetes (and specifically Istio) to enforce network access controls. It also sets the `distributed.dashboard-link` config option in Dask so that the widgets and `.dashboard_link` attributes of the `KubeCluster` and `Client` objects show a url that uses the Jupyter server proxy.
Once you have created this configuration option you can select it when launching new notebook instances.
```{figure} /images/kubeflow-configure-dashboard-option.png
---
alt: Screenshot of the Kubeflow new notebook form with the “configure dask dashboard” configuration option selected
---
Check the “configure dask dashboard” option
```
You can then follow the links provided by the widgets in your notebook to open the Dask Dashboard in a new tab.
```{figure} /images/kubeflow-dask-dashboard.png
---
alt: Screenshot of the Dask dashboard
---
```
You can also use the [Dask Jupyter Lab extension](https://github.com/dask/dask-labextension) to view various plots and stats about your Dask cluster right in Jupyter Lab. Open up the Dask tab in the left side menu and click the little search icon; this will connect Jupyter Lab to the dashboard via the client in your notebook. Then you can click the various plots you want to see and arrange them in Jupyter Lab however you like by dragging the tabs around.
```{figure} /images/kubeflow-jupyter-dask-labextension.png
---
alt: Screenshot of Jupyter Lab with the Dask Lab extension open on the left and various Dask plots arranged on the screen
---
```
```{relatedexamples}
```
# Kubernetes
RAPIDS integrates with Kubernetes in many ways depending on your use case.
(interactive-notebook)=
## Interactive Notebook
For single-user interactive sessions you can run the [RAPIDS docker image](/tools/rapids-docker) which contains a conda environment with the RAPIDS libraries and Jupyter for interactive use.
You can run this directly on Kubernetes as a `Pod` and expose Jupyter via a `Service`. For example:
```yaml
# rapids-notebook.yaml
apiVersion: v1
kind: Service
metadata:
name: rapids-notebook
labels:
app: rapids-notebook
spec:
type: NodePort
ports:
- port: 8888
name: http
targetPort: 8888
nodePort: 30002
selector:
app: rapids-notebook
---
apiVersion: v1
kind: Pod
metadata:
name: rapids-notebook
labels:
app: rapids-notebook
spec:
securityContext:
fsGroup: 0
containers:
- name: rapids-notebook
image: "{{ rapids_notebooks_container }}"
resources:
limits:
nvidia.com/gpu: 1
ports:
- containerPort: 8888
name: notebook
```
````{dropdown} Optional: Extended notebook configuration to enable launching multi-node Dask clusters
:color: info
:icon: info
Deploying an interactive single-user notebook can provide a great place to launch further resources. For example you could install `dask-kubernetes` and use the [dask-operator](../tools/kubernetes/dask-operator) to create multi-node Dask clusters from your notebooks.
To do this you'll need to create a couple of extra resources when launching your notebook `Pod`.
### Service account and role
To be able to interact with the Kubernetes API from within your notebook and create Dask resources you'll need to create a service account with an attached role.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: rapids-dask
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: rapids-dask
rules:
- apiGroups: [""]
resources: ["pods", "services"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list"]
- apiGroups: [kubernetes.dask.org]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: rapids-dask
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: rapids-dask
subjects:
- kind: ServiceAccount
name: rapids-dask
```
Then you need to augment the `Pod` spec above with a reference to this service account.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: rapids-notebook
labels:
app: rapids-notebook
spec:
serviceAccountName: rapids-dask
...
```
### Proxying the Dask dashboard and other services
The RAPIDS container comes with the [jupyter-server-proxy](https://jupyter-server-proxy.readthedocs.io/en/latest/) plugin preinstalled which you can use to access other services running in your notebook via the Jupyter URL. However, by default [this is restricted to only proxying services running within your Jupyter Pod](https://jupyter-server-proxy.readthedocs.io/en/latest/arbitrary-ports-hosts.html). To access other resources like Dask clusters that have been launched in the Kubernetes cluster we need to configure Jupyter to allow this.
First we create a `ConfigMap` with our configuration file.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: jupyter-server-proxy-config
data:
jupyter_server_config.py: |
c.ServerProxy.host_allowlist = lambda app, host: True
```
Then we further modify our `Pod` spec to mount this config map into the right location.
```yaml
apiVersion: v1
kind: Pod
...
spec:
containers:
- name: rapids-notebook
...
volumeMounts:
- name: jupyter-server-proxy-config
mountPath: /root/.jupyter/jupyter_server_config.py
subPath: jupyter_server_config.py
volumes:
- name: jupyter-server-proxy-config
configMap:
name: jupyter-server-proxy-config
```
We also might want to configure Dask to know where to look for the Dashboard via the proxied URL. We can set this via an environment variable in our `Pod`.
```yaml
apiVersion: v1
kind: Pod
...
spec:
containers:
- name: rapids-notebook
...
env:
- name: DASK_DISTRIBUTED__DASHBOARD__LINK
value: "/proxy/{host}:{port}/status"
```
### Putting it all together
Here's an extended `rapids-notebook.yaml` spec putting all of this together.
```yaml
# rapids-notebook.yaml (extended)
apiVersion: v1
kind: ServiceAccount
metadata:
name: rapids-dask
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: rapids-dask
rules:
- apiGroups: [""]
resources: ["pods", "services"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list"]
- apiGroups: [kubernetes.dask.org]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: rapids-dask
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: rapids-dask
subjects:
- kind: ServiceAccount
name: rapids-dask
---
apiVersion: v1
kind: ConfigMap
metadata:
name: jupyter-server-proxy-config
data:
jupyter_server_config.py: |
c.ServerProxy.host_allowlist = lambda app, host: True
---
apiVersion: v1
kind: Service
metadata:
name: rapids-notebook
labels:
app: rapids-notebook
spec:
type: ClusterIP
ports:
- port: 8888
name: http
targetPort: notebook
selector:
app: rapids-notebook
---
apiVersion: v1
kind: Pod
metadata:
name: rapids-notebook
labels:
app: rapids-notebook
spec:
serviceAccountName: rapids-dask
securityContext:
fsGroup: 0
containers:
- name: rapids-notebook
image: "{{ rapids_notebooks_container }}"
resources:
limits:
nvidia.com/gpu: 1
ports:
- containerPort: 8888
name: notebook
env:
- name: DASK_DISTRIBUTED__DASHBOARD__LINK
value: "/proxy/{host}:{port}/status"
volumeMounts:
- name: jupyter-server-proxy-config
mountPath: /root/.jupyter/jupyter_server_config.py
subPath: jupyter_server_config.py
volumes:
- name: jupyter-server-proxy-config
configMap:
name: jupyter-server-proxy-config
```
````
```console
$ kubectl apply -f rapids-notebook.yaml
```
This makes Jupyter accessible on port `30002` of your Kubernetes nodes via the `NodePort` service. Alternatively you could use a `LoadBalancer` service type [if you have one configured](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/) or a `ClusterIP` and use `kubectl` to port forward the port locally and access it that way.
```console
$ kubectl port-forward service/rapids-notebook 8888
```
Then you can open port `8888` in your browser to access Jupyter and use RAPIDS.
```{figure} /images/kubernetes-jupyter.png
---
alt: Screenshot of the RAPIDS container running Jupyter showing the nvidia-smi command with a GPU listed
---
```
## Helm Chart
Individual users can also install the [Dask Helm Chart](https://helm.dask.org) which provides a `Pod` running Jupyter alongside a Dask cluster consisting of pods running the Dask scheduler and worker components. You can customize this helm chart to run the RAPIDS container images as both the notebook server and Dask cluster components so that everything can benefit from GPU acceleration.
Find out more on the [Dask Helm Chart page](/tools/kubernetes/dask-helm-chart).
(dask-operator)=
## Dask Operator
[Dask has an operator](https://kubernetes.dask.org/en/latest/operator.html) that empowers users to create Dask clusters as native Kubernetes resources. This is useful for creating, scaling and removing Dask clusters dynamically and in a flexible way. Usually this is used in conjunction with an interactive session such as the [interactive notebook](interactive-notebook) example above or from another service like [KubeFlow Notebooks](/platforms/kubeflow). By dynamically launching Dask clusters configured to use RAPIDS on Kubernetes, users can burst beyond their notebook session to many GPUs spread across many nodes.
Find out more on the [Dask Operator page](/tools/kubernetes/dask-operator).
## Dask Kubernetes (classic)
```{warning}
Unless you are already using the [classic Dask Kubernetes integration](https://kubernetes.dask.org/en/latest/kubecluster.html) we recommend using the [Dask Operator](dask-operator) instead.
```
Dask has an older tool for dynamically launching Dask clusters on Kubernetes that does not use an operator. It is possible to configure this to run RAPIDS too but it is being phased out in favour of the operator.
Find out more on the [Dask Kubernetes page](/tools/kubernetes/dask-kubernetes).
## Dask Gateway
Some organisations may want to provide Dask cluster provisioning as a central service where users are abstracted from the underlying platform like Kubernetes. This can be useful for reducing user permissions, limiting resources that users can consume and exposing things in a centralised way. For this you can deploy Dask Gateway, which provides a server that users interact with programmatically and which in turn launches Dask clusters on Kubernetes and proxies the connection back to the user.
Users can configure what they want their Dask cluster to look like so it is possible to utilize GPUs and RAPIDS for an accelerated cluster.
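As an illustrative sketch only: the available cluster options, including the GPU and image options shown here, are hypothetical and depend entirely on how your administrator configured the gateway.
```python
from dask_gateway import Gateway

gateway = Gateway()  # gateway address taken from your Dask configuration

# Request a GPU-enabled cluster; these option names are hypothetical
options = gateway.cluster_options()
options.worker_gpus = 1
options.image = "{{ rapids_container }}"

cluster = gateway.new_cluster(options)
cluster.scale(2)
client = cluster.get_client()
```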
## KubeFlow
If you are using KubeFlow you can integrate RAPIDS right away by using the RAPIDS container images within notebooks and pipelines and by using the Dask Operator to launch GPU accelerated Dask clusters.
Find out more on the [KubeFlow page](/platforms/kubeflow).
```{relatedexamples}
```
# KServe
[KServe](https://kserve.github.io/website) is a standard model inference platform built for Kubernetes. It provides a consistent interface for multiple machine learning frameworks.
On this page, we will show you how to deploy RAPIDS models using KServe.
```{note}
These instructions were tested against KServe v0.10 running on [Kubernetes v1.21](https://kubernetes.io/blog/2021/04/08/kubernetes-1-21-release-announcement/).
```
## Setting up Kubernetes cluster with GPU access
First, you should set up a Kubernetes cluster with access to NVIDIA GPUs. Visit [the Cloud Section](/cloud/index) for guidance.
## Installing KServe
Visit [Getting Started with KServe](https://kserve.github.io/website/latest/get_started/) to install KServe in your Kubernetes cluster. If you are starting out, we recommend using the "Quickstart" script (`quick_install.sh`) provided on that page. On the other hand, if you are setting up a production-grade system, follow the directions in the [Administration Guide](https://kserve.github.io/website/latest/admin/serverless/serverless) instead.
## Setting up First InferenceService
Once KServe is installed, visit [First InferenceService](https://kserve.github.io/website/latest/get_started/first_isvc/) to quickly set up a first inference endpoint. (The example uses the [Support Vector Machine from scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) to classify [the Iris dataset](https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html).) Follow through all the steps carefully and make sure everything works. In particular, you should be able to submit inference requests using cURL.
## Setting up InferenceService with Triton-FIL
[The FIL backend for Triton Inference Server](https://github.com/triton-inference-server/fil_backend) (Triton-FIL in short) is an optimized inference runtime for many kinds of tree-based models including: XGBoost, LightGBM, scikit-learn, and cuML RandomForest. We can use Triton-FIL together with KServe and serve any tree-based models.
The following manifest sets up an inference endpoint using Triton-FIL:
```yaml
# triton-fil.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
name: triton-fil
spec:
predictor:
triton:
storageUri: gs://path-to-gcloud-storage-bucket/model-directory
runtimeVersion: 22.12-py3
```
where `model-directory` is set up with the following hierarchy:
```text
model-directory/
\__ model/
\__ config.pbtxt
\__ 1/
\__ [model file goes here]
```
where `config.pbtxt` contains the configuration for the Triton-FIL backend.
A typical `config.pbtxt` is given below, with explanation interspersed as
`#` comments. Before use, make sure to remove `#` comments and fill in
the blanks.
```text
backend: "fil"
max_batch_size: 32768
input [
{
name: "input__0"
data_type: TYPE_FP32
dims: [ ___ ] # Number of features (columns) in the training data
}
]
output [
{
name: "output__0"
data_type: TYPE_FP32
dims: [ 1 ]
}
]
instance_group [{ kind: KIND_AUTO }]
# Triton-FIL will intelligently choose between CPU and GPU
parameters [
{
key: "model_type"
value: { string_value: "_____" }
# Can be "xgboost", "xgboost_json", "lightgbm", or "treelite_checkpoint"
# See subsections for examples
},
{
key: "output_class"
value: { string_value: "____" }
# true (if classifier), or false (if regressor)
},
{
key: "threshold"
value: { string_value: "0.5" }
# Threshold for predicting the positive class in a binary classifier
}
]
dynamic_batching {}
```
We will show you concrete examples below. But first some general notes:
- The payload JSON will look different from the First InferenceService example:
```json
{
  "inputs" : [
    {
      "name" : "input__0",
      "shape" : [ 1, 6 ],
      "datatype" : "FP32",
      "data" : [0, 0, 0, 0, 0, 0]
    }
  ],
  "outputs" : [
    {
      "name" : "output__0",
      "parameters" : { "classification" : 2 }
    }
  ]
}
```
- Triton-FIL uses v2 version of KServe protocol, so make sure to use `v2` URL when sending inference request:
```console
$ INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
-o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ SERVICE_HOSTNAME=$(kubectl get inferenceservice <endpoint name> -n kserve-test \
-o jsonpath='{.status.url}' | cut -d "/" -f 3)
$ curl -v -H "Host: ${SERVICE_HOSTNAME}" -H "Content-Type: application/json" \
"http://${INGRESS_HOST}:${INGRESS_PORT}/v2/models/<endpoint name>/infer" \
-d @./payload.json
```
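The same request can also be sent from Python; a minimal sketch, assuming the `requests` package and that the `INGRESS_HOST`, `INGRESS_PORT` and `SERVICE_HOSTNAME` values gathered above are available as Python strings (replace `<endpoint name>` with your endpoint):
```python
import json

import requests

# Load the payload shown above
with open("payload.json") as f:
    payload = json.load(f)

url = f"http://{INGRESS_HOST}:{INGRESS_PORT}/v2/models/<endpoint name>/infer"
response = requests.post(
    url,
    json=payload,
    headers={"Host": SERVICE_HOSTNAME},
)
print(response.json())
```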
### XGBoost
To deploy an XGBoost model, save it using the JSON format:
```python
import xgboost as xgb
clf = xgb.XGBClassifier(...)
clf.fit(X, y)
clf.save_model("my_xgboost_model.json") # Note the .json extension
```
Rename the model file to `xgboost.json`, as this is the convention used by Triton-FIL.
After moving the model file into the model directory, the directory should look like this:
```text
model-directory/
\__ model/
\__ config.pbtxt
\__ 1/
\__ xgboost.json
```
In `config.pbtxt`, set `model_type="xgboost_json"`.
### cuML RandomForest
To deploy a cuML random forest, save it as a Treelite checkpoint file:
```python
from cuml.ensemble import RandomForestClassifier as cumlRandomForestClassifier
clf = cumlRandomForestClassifier(...)
clf.fit(X, y)
clf.convert_to_treelite_model().to_treelite_checkpoint("./checkpoint.tl")
```
Rename the checkpoint file to `checkpoint.tl`, as this is the convention used by Triton-FIL.
After moving the model file into the model directory, the directory should look like this:
```text
model-directory/
\__ model/
\__ config.pbtxt
\__ 1/
\__ checkpoint.tl
```
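In `config.pbtxt`, set `model_type="treelite_checkpoint"`.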
### Configuring Triton-FIL
Triton-FIL offers many configuration options, and we only showed you a few of them. Please visit [FIL Backend Model Configuration](https://github.com/triton-inference-server/fil_backend/blob/main/docs/model_config.md) to check out the rest.
# RAPIDS on Google Colab
## Overview
This guide is broken into two sections:
1. [RAPIDS Quick Install](colab-quick) - applicable for most users
2. [RAPIDS Custom Setup Instructions](colab-custom) - step-by-step setup instructions covering the **must haves** for when a user needs to adapt the instance to their workflows
In both sections, we will be installing RAPIDS on Colab using pip or conda. Here are the differences between the two installation methods:
- Pip installation allows users to install cuDF, cuML, cuGraph, and cuSpatial stable versions in a few minutes (1/5 ease of install)
- Conda installation installs the complete, customized RAPIDS library package (such as installing stable or nightly); however, it can take around 15 minutes to install and has a couple of break points requiring the user to manually continue the installation (2/5 ease of install)
The RAPIDS install on Colab strives to be an "always working" solution, and sometimes will **pin** RAPIDS versions to ensure compatibility.
(colab-quick)=
## Section 1: RAPIDS Quick Install
### Links
Please follow the links below to our install templates:
#### Pip
1. Open the pip template link by clicking this button -->
<a target="_blank" href="https://colab.research.google.com/drive/13sspqiEZwso4NYTbsflpPyNFaVAAxUgr">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a> .
1. Click **Runtime** > **Run All**.
1. Wait a few minutes for the installation to complete without errors.
1. Add your code in the cells below the template.
#### Conda
1. Open the conda template link by clicking this button -->
<a target="_blank" href="https://colab.research.google.com/drive/1TAAi_szMfWqRfHVfjGSqnGVLr_ztzUM9">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a> . There are instructions in the notebook and below. Default to the Notebook's instructions if they deviate, as below is for reference and additional context.
1. Click **Runtime** > **Run All**. This will NOT run all cells as the installation will pause after updating Colab's gcc. Ignore all Colab alerts.
1. Go to the next unrun cell and run it to install conda. The installation will pause again. Ignore all Colab alerts.
1. Run the cell that tests the conda installation.
1. Before running the RAPIDS install cell, you can change the installation type between `stable` and `nightly`. Leaving it blank or adding any other words will default to 'stable'. All disclaimers around nightly installs apply.
1. Run the rest of the cells to complete the installation of RAPIDS on Colab.
1. Add your code in the cells below the template.
(colab-custom)=
## Section 2: User Customizable RAPIDS Install Instructions
### 1. Launch notebook
To get started in [Google Colab](https://colab.research.google.com/), click `File` in the top toolbar to create a new notebook or upload an existing one.
### 2. Set the Runtime
Click the `Runtime` dropdown and select `Change Runtime Type`

Choose GPU for Hardware Accelerator

### 3. Check GPU type
Check the output of `!nvidia-smi` to make sure you've been allocated a RAPIDS-compatible GPU, e.g. a Tesla T4, P4, or P100.

### 4. Install RAPIDS on Colab
You can install RAPIDS using
1. pip
1. conda
#### 4.1. Pip
This checks GPU compatibility with RAPIDS, then installs the latest **stable** versions of the core RAPIDS libraries (cuDF, cuML, cuGraph, and XGBoost) using `pip`.
```bash
# Colab warns and provides remediation steps if the GPU is not compatible with RAPIDS.
!git clone https://github.com/rapidsai/rapidsai-csp-utils.git
!python rapidsai-csp-utils/colab/pip-install.py
```
#### 4.2. Conda
If you need to install any RAPIDS Extended libraries or the nightly version, you can use the [RAPIDS Conda Colab Template](https://colab.research.google.com/drive/1TAAi_szMfWqRfHVfjGSqnGVLr_ztzUM9) notebook and install via `conda`.
1. Create and run a cell with the code below to update Colab's gcc. Ignore all Colab alerts.
```python
!bash rapidsai-csp-utils/colab/update_gcc.sh
import os
os._exit(00)
```
1. Create and run a cell with the code below to install conda on Colab. Ignore all Colab alerts.
```python
import condacolab
condacolab.install()
```
[Optional] Run the cell below to test the conda installation.
```python
import condacolab
condacolab.check()
```
1. Before running the RAPIDS install cell, you can change the installation type between `stable` and `nightly`. All disclaimers around nightly installs apply.
1. Run the rest of the cells to complete the installation of RAPIDS on Colab.
```python
!python rapidsai-csp-utils/colab/install_rapids.py stable # example runs stable
import os
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
os.environ['CONDA_PREFIX'] = '/usr/local'
```
### 5. Test RAPIDS
```python
import cudf
gdf = cudf.DataFrame({"a":[1,2,3],"b":[4,5,6]})
gdf
a b
0 1 4
1 2 5
2 3 6
```
### 6. Next steps
Check out this [guide](https://towardsdatascience.com/) for an overview of how to access and work with your own datasets in Colab.
For more RAPIDS examples, check out our RAPIDS [notebooks](https://github.com/rapidsai/notebooks) and [notebooks-contrib](https://github.com/rapidsai/notebooks-contrib) repos.
| 0 |
rapidsai_public_repos/deployment/source
|
rapidsai_public_repos/deployment/source/_includes/install-rapids-with-docker.md
|
There are a selection of methods you can use to install RAPIDS which you can see via the [RAPIDS release selector](https://docs.rapids.ai/install#selector).
For this example we are going to run the RAPIDS Docker container so we need to know the name of the most recent container.
On the release selector choose **Docker** in the **Method** column.
Then copy the commands shown:
```bash
docker pull {{ rapids_container }}
docker run --gpus all --rm -it \
--shm-size=1g --ulimit memlock=-1 \
-p 8888:8888 -p 8787:8787 -p 8786:8786 \
{{ rapids_container }}
```
```{note}
If you see a "docker socket permission denied" error while running these commands try closing and reconnecting your
SSH window. This happens because your user was added to the `docker` group only after you signed in.
```
| 0 |
rapidsai_public_repos/deployment/source
|
rapidsai_public_repos/deployment/source/_includes/check-gpu-pod-works.md
|
Let's create a sample pod that uses some GPU compute to make sure that everything is working as expected.
```console
$ cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: cuda-vectoradd
spec:
restartPolicy: OnFailure
containers:
- name: cuda-vectoradd
image: "nvidia/samples:vectoradd-cuda11.2.1"
resources:
limits:
nvidia.com/gpu: 1
EOF
```
```console
$ kubectl logs pod/cuda-vectoradd
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
```
If you see `Test PASSED` in the output, you can be confident that your Kubernetes cluster has GPU compute set up correctly.
Next, clean up that pod.
```console
$ kubectl delete pod cuda-vectoradd
pod "cuda-vectoradd" deleted
```
| 0 |
rapidsai_public_repos/deployment/source
|
rapidsai_public_repos/deployment/source/_includes/test-rapids-docker-vm.md
|
In the terminal we can open `ipython` and check that we can import and use RAPIDS libraries like `cudf`.
```ipython
In [1]: import cudf
In [2]: df = cudf.datasets.timeseries()
In [3]: df.head()
Out[3]:
id name x y
timestamp
2000-01-01 00:00:00 1020 Kevin 0.091536 0.664482
2000-01-01 00:00:01 974 Frank 0.683788 -0.467281
2000-01-01 00:00:02 1000 Charlie 0.419740 -0.796866
2000-01-01 00:00:03 1019 Edith 0.488411 0.731661
2000-01-01 00:00:04 998 Quinn 0.651381 -0.525398
```
You can also access Jupyter via `<VM ip>:8888` in the browser.
Visit `cudf/10-min.ipynb` and execute the cells to try things out.
When running a Dask cluster you can also visit `<VM ip>:8787` to monitor the Dask cluster status.
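If you want to start a Dask cluster inside the container, a minimal sketch using `dask-cuda` (which ships in the RAPIDS container) looks like this:
```python
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

# Start one worker per GPU on this VM; the dashboard is served on port 8787
cluster = LocalCUDACluster()
client = Client(cluster)
print(client.dashboard_link)
```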
| 0 |
rapidsai_public_repos/deployment/source/_includes
|
rapidsai_public_repos/deployment/source/_includes/menus/aws.md
|
`````{grid} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: /cloud/aws/ec2
:link-type: doc
Elastic Compute Cloud (EC2)
^^^
Launch an EC2 instance and run RAPIDS.
{bdg}`single-node`
````
````{grid-item-card}
:link: /cloud/aws/ec2-multi
:link-type: doc
EC2 Cluster (with Dask)
^^^
Launch a RAPIDS cluster on EC2 with Dask.
{bdg}`multi-node`
````
````{grid-item-card}
:link: /cloud/aws/eks
:link-type: doc
Elastic Kubernetes Service (EKS)
^^^
Launch a RAPIDS cluster on managed Kubernetes.
{bdg}`multi-node`
````
````{grid-item-card}
:link: /cloud/aws/ecs
:link-type: doc
Elastic Container Service (ECS)
^^^
Launch a RAPIDS cluster on managed container service.
{bdg}`multi-node`
````
````{grid-item-card}
:link: /cloud/aws/sagemaker
:link-type: doc
SageMaker
^^^
Launch the RAPIDS container as a SageMaker notebook.
{bdg}`single-node`
{bdg}`multi-node`
````
`````
| 0 |
rapidsai_public_repos/deployment/source/_includes
|
rapidsai_public_repos/deployment/source/_includes/menus/azure.md
|
`````{grid} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: /cloud/azure/azure-vm
:link-type: doc
Azure Virtual Machine
^^^
Launch an Azure VM instance and run RAPIDS.
{bdg}`single-node`
````
````{grid-item-card}
:link: /cloud/azure/aks
:link-type: doc
Azure Kubernetes Service (AKS)
^^^
Launch a RAPIDS cluster on managed Kubernetes.
{bdg}`multi-node`
````
````{grid-item-card}
:link: /cloud/azure/azure-vm-multi
:link-type: doc
Azure Cluster via Dask
^^^
Launch a RAPIDS cluster on Azure VMs or Azure ML with Dask.
{bdg}`multi-node`
````
````{grid-item-card}
:link: /cloud/azure/azureml
:link-type: doc
Azure Machine Learning (Azure ML)
^^^
Launch RAPIDS Experiment on Azure ML.
{bdg}`single-node`
{bdg}`multi-node`
````
`````
| 0 |
rapidsai_public_repos/deployment/source/_includes
|
rapidsai_public_repos/deployment/source/_includes/menus/gcp.md
|
`````{grid} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: /cloud/gcp/compute-engine
:link-type: doc
Compute Engine Instance
^^^
Launch a Compute Engine instance and run RAPIDS.
{bdg}`single-node`
````
````{grid-item-card}
:link: /cloud/gcp/vertex-ai
:link-type: doc
Vertex AI
^^^
Launch the RAPIDS container in Vertex AI managed notebooks.
{bdg}`single-node`
````
````{grid-item-card}
:link: /cloud/gcp/gke
:link-type: doc
Google Kubernetes Engine (GKE)
^^^
Launch a RAPIDS cluster on managed Kubernetes.
{bdg}`multi-node`
````
````{grid-item-card}
:link: /cloud/gcp/dataproc
:link-type: doc
Dataproc
^^^
Launch a RAPIDS cluster on Dataproc.
{bdg}`multi-node`
````
`````
| 0 |
rapidsai_public_repos/deployment/source/_includes
|
rapidsai_public_repos/deployment/source/_includes/menus/ibm.md
|
`````{grid} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: /cloud/ibm/virtual-server
:link-type: doc
IBM Virtual Server
^^^
Launch a virtual server and run RAPIDS.
{bdg}`single-node`
````
`````
| 0 |
rapidsai_public_repos/deployment/source/_includes
|
rapidsai_public_repos/deployment/source/_includes/menus/nvidia.md
|
`````{grid} 1 2 2 3
:gutter: 2 2 2 2
````{grid-item-card}
:link: /cloud/nvidia/bcp
:link-type: doc
Base Command Platform
^^^
Run RAPIDS workloads on NVIDIA DGX Cloud with Base Command Platform.
{bdg}`single-node`
{bdg}`multi-node`
````
`````
| 0 |
rapidsai_public_repos/deployment/source
|
rapidsai_public_repos/deployment/source/_templates/notebooks-tag-filter.html
|
<nav class="bd-links" id="bd-docs-nav" aria-label="Section navigation">
<p class="bd-links__title" role="heading" aria-level="1">
Tag filters
<small>(<a href="#" id="resetfilters">reset</a>)</small>
</p>
{% for section in sorted(notebook_tag_tree) %}
<fieldset aria-level="2" class="caption" role="heading">
<legend class="caption-text">
{{ section.title().replace("-", " ") }}
</legend>
{% for tag in sorted(notebook_tag_tree[section]) %}
<div>
<input
type="checkbox"
id="{{section}}/{{ tag }}"
class="tag-filter"
name="{{ tag }}"
/>
<label for="{{ tag }}">
<span class="sd-sphinx-override sd-badge">{{section}}/{{ tag }}</span>
</label>
</div>
{% endfor %}
</fieldset>
{% endfor %}
</nav>
| 0 |
rapidsai_public_repos/deployment/source
|
rapidsai_public_repos/deployment/source/_templates/notebooks-extra-files-nav.html
|
{% if related_notebook_files %} {% macro gen_list(root, dir, related_files) -%}
{{ dir }}
<ul class="visible nav section-nav flex-column">
{% for name in related_files|sort(case_sensitive=False) %}
<li class="toc-h2 nav-item toc-entry">
{% if related_files[name] is mapping %} {{ gen_list(root + name + "/", name
+ "/", related_files[name]) }} {% else %}
<a
class="reference external nav-link"
href="../{{ root }}{{ related_files[name] }}"
>
{{ name }}
</a>
{% endif %}
</li>
{% endfor %}
</ul>
{%- endmacro %}
<div class="tocsection onthispage">
<i class="fa-regular fa-file"></i> Related files
</div>
<nav id="bd-toc-nav" class="page-toc related-files">
{{ gen_list("", "", related_notebook_files) }} {% if
related_notebook_files_archive %}
<div style="font-size: 0.9em; margin-top: 0.5em">
<a href="../{{ related_notebook_files_archive }}"
><i class="fa-solid fa-download"></i> Download all
</a>
</div>
{% endif %}
</nav>
{% endif %}
| 0 |
rapidsai_public_repos/deployment/source
|
rapidsai_public_repos/deployment/source/_templates/notebooks-tags.html
|
{% if notebook_tags %}
<div class="tocsection onthispage"><i class="fa-solid fa-tags"></i> Tags</div>
<nav id="bd-toc-nav" class="page-toc">
<div class="tagwrapper">
{% for tag in notebook_tags %}
<a href="../../?filters={{ tag }}">
<span class="sd-sphinx-override sd-badge">{{ tag }}</span>
</a>
{% endfor %}
</div>
</nav>
{% endif %}
| 0 |
rapidsai_public_repos/deployment/source
|
rapidsai_public_repos/deployment/source/cloud/index.md
|
---
html_theme.sidebar_secondary.remove: true
---
# Cloud
## NVIDIA DGX Cloud
```{include} ../_includes/menus/nvidia.md
```
## Amazon Web Services
```{include} ../_includes/menus/aws.md
```
## Microsoft Azure
```{include} ../_includes/menus/azure.md
```
## Google Cloud Platform
```{include} ../_includes/menus/gcp.md
```
## IBM Cloud
```{include} ../_includes/menus/ibm.md
```
```{toctree}
:maxdepth: 2
:caption: Cloud
:hidden:
nvidia/index
aws/index
azure/index
gcp/index
ibm/index
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/ibm/virtual-server.md
|
# Virtual Server for VPC
## Create Instance
Create a new [Virtual Server (for VPC)](https://www.ibm.com/cloud/virtual-servers) with GPUs, the [NVIDIA Driver](https://www.nvidia.co.uk/Download/index.aspx) and the [NVIDIA Container Runtime](https://developer.nvidia.com/nvidia-container-runtime).
1. Open the [**Virtual Server Dashboard**](https://cloud.ibm.com/vpc-ext/compute/vs).
1. Select **Create**.
1. Give the server a **name** and select your **resource group**.
1. Under **Operating System** choose **Ubuntu Linux**.
1. Under **Profile** select **View all profiles** and select a profile with NVIDIA GPUs.
1. Under **SSH Keys** choose your SSH key.
1. Under network settings, create a security group (or choose an existing one) that allows SSH access on port `22` as well as ports `8888,8786,8787` for access to Jupyter and Dask.
1. Select **Create Virtual Server**.
## Create floating IP
To access the virtual server we need to attach a public IP address.
1. Open [**Floating IPs**](https://cloud.ibm.com/vpc-ext/network/floatingIPs)
1. Select **Reserve**.
1. Give the Floating IP a **name**.
1. Under **Resource to bind** select the virtual server you just created.
## Connect to the instance
Next we need to connect to the instance.
1. Open [**Floating IPs**](https://cloud.ibm.com/vpc-ext/network/floatingIPs)
1. Locate the IP you just created and note the address.
1. In your terminal run `ssh root@<ip address>`
```{note}
For a short guide on launching your instance and accessing it, read the
[Getting Started with IBM Virtual Server Documentation](https://cloud.ibm.com/docs/virtual-servers?topic=virtual-servers-getting-started-tutorial).
```
## Install NVIDIA Drivers
Next we need to install the NVIDIA drivers and container runtime.
1. Ensure build essentials are installed: `apt-get update && apt-get install build-essential -y`.
1. Install the [NVIDIA drivers](https://www.nvidia.com/Download/index.aspx?lang=en-us).
1. Install [Docker and the NVIDIA Docker runtime](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html).
````{dropdown} How do I check everything installed successfully?
:color: info
:icon: info
You can check everything installed correctly by running `nvidia-smi` in a container.
```console
$ docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03 Driver Version: 510.108.03 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-PCIE... Off | 00000000:04:01.0 Off | 0 |
| N/A 33C P0 36W / 250W | 0MiB / 16384MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
````
## Install RAPIDS
```{include} ../../_includes/install-rapids-with-docker.md
```
## Test RAPIDS
```{include} ../../_includes/test-rapids-docker-vm.md
```
```{relatedexamples}
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/ibm/index.md
|
---
html_theme.sidebar_secondary.remove: true
---
# IBM Cloud
```{include} ../../_includes/menus/ibm.md
```
RAPIDS can be deployed on IBM Cloud in several ways. See the
list of accelerated instance types below:
| Cloud <br> Provider | Inst. <br> Type | vCPUs | Inst. <br> Name | GPU <br> Count | GPU <br> Type | GPU <br> RAM | Instance <br> RAM |
| :------------------ | --------------------- | ----- | ------------------ | -------------- | ------------- | ------------- | ------------------: |
| IBM | V100 GPU Virtual | 8 | gx2-8x64x1v100 | 1 | NVIDIA Tesla | 16 (GB) | 64 (GB) |
| IBM | V100 GPU Virtual | 16 | gx2-16x128x1v100 | 1 | NVIDIA Tesla | 16 (GB) | 128 (GB) |
| IBM | V100 GPU Virtual | 16 | gx2-16x128x2v100 | 2 | NVIDIA Tesla | 16 (GB) | 128 (GB) |
| IBM | V100 GPU Virtual | 32 | gx2-32x256x2v100 | 2 | NVIDIA Tesla | 16 (GB) | 256 (GB) |
| IBM | P100 GPU Bare Metal\* | 32 | mg4c.32x384.2xp100 | 2 | NVIDIA Tesla | 16 (GB) | 384 (GB) |
| IBM | V100 GPU Bare Metal\* | 48 | mg4c.48x384.2xv100 | 2 | NVIDIA Tesla | 16 (GB) | 384 (GB) |
```{warning}
*Bare Metal instances are billed in monthly intervals rather than hourly intervals.
```
```{toctree}
---
hidden: true
---
virtual-server
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/gcp/compute-engine.md
|
# Compute Engine Instance
## Create Virtual Machine
Create a new [Compute Engine Instance](https://cloud.google.com/compute/docs/instances) with GPUs, the [NVIDIA Driver](https://www.nvidia.co.uk/Download/index.aspx) and the [NVIDIA Container Runtime](https://developer.nvidia.com/nvidia-container-runtime).
NVIDIA maintains a [Virtual Machine Image (VMI) that pre-installs NVIDIA drivers and container runtimes](https://console.cloud.google.com/marketplace/product/nvidia-ngc-public/nvidia-gpu-optimized-vmi); we recommend using this image.
1. Open [**Compute Engine**](https://console.cloud.google.com/compute/instances).
1. Select **Create Instance**.
1. Select **Marketplace**.
1. Search for "nvidia" and select **NVIDIA GPU-Optimized VMI**, then select **Launch**.
1. In the **New NVIDIA GPU-Optimized VMI deployment** interface, fill in the name and any required information for the VM (the defaults should be fine for most users).
1. **Read and accept** the Terms of Service
1. Select **Deploy** to start the virtual machine.
## Allow network access
To access Jupyter and Dask we will need to set up some firewall rules to open up some ports.
### Create the firewall rule
1. Open [**VPC Network**](https://console.cloud.google.com/networking/networks/list).
2. Select **Firewall** and **Create firewall rule**
3. Give the rule a name like `rapids` and ensure the network matches the one you selected for the VM.
4. Add a tag like `rapids` which we will use to assign the rule to our VM.
5. Set your source IP range. We recommend you restrict this to your own IP address or your corporate network rather than `0.0.0.0/0` which will allow anyone to access your VM.
6. Under **Protocols and ports** allow TCP connections on ports `8786,8787,8888`.
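If you prefer the command line, the console steps above can also be scripted with `gcloud`; a minimal sketch, where the rule name, network, tag, and source range are placeholders to adapt:
```console
$ gcloud compute firewall-rules create rapids \
    --network=default \
    --allow=tcp:8786,tcp:8787,tcp:8888 \
    --target-tags=rapids \
    --source-ranges=<your ip>/32
```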
### Assign it to the VM
1. Open [**Compute Engine**](https://console.cloud.google.com/compute/instances).
2. Select your VM and press **Edit**.
3. Scroll down to **Networking** and add the `rapids` network tag you gave your firewall rule.
4. Select **Save**.
## Connect to the VM
Next we need to connect to the VM.
1. Open [**Compute Engine**](https://console.cloud.google.com/compute/instances).
2. Locate your VM and press the **SSH** button which will open a new browser tab with a terminal.
3. **Read and accept** the NVIDIA installer prompts.
## Install RAPIDS
```{include} ../../_includes/install-rapids-with-docker.md
```
## Test RAPIDS
```{include} ../../_includes/test-rapids-docker-vm.md
```
## Clean up
Once you are finished head back to the [Deployments](https://console.cloud.google.com/dm/deployments) page and delete the marketplace deployment you created.
```{relatedexamples}
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/gcp/dataproc.md
|
# Dataproc
RAPIDS can be deployed on Google Cloud Dataproc using Dask. For more details, see our **[detailed instructions and helper scripts.](https://github.com/GoogleCloudDataproc/initialization-actions/tree/master/rapids)**
**0. Copy initialization actions to your own Cloud Storage bucket.** Don't create clusters that reference initialization actions located in `gs://goog-dataproc-initialization-actions-REGION` public buckets. These scripts are provided as reference implementations and are synchronized with ongoing [GitHub repository](https://github.com/GoogleCloudDataproc/initialization-actions) changes.
It is strongly recommended that you copy the initialization scripts into your own Storage bucket to prevent unintended upgrades from upstream in the cluster:
```console
$ REGION=<region>
$ GCS_BUCKET=<bucket_name>
$ gcloud storage buckets create gs://$GCS_BUCKET
$ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/gpu/install_gpu_driver.sh gs://$GCS_BUCKET
$ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/dask/dask.sh gs://$GCS_BUCKET
$ gsutil cp gs://goog-dataproc-initialization-actions-${REGION}/rapids/rapids.sh gs://$GCS_BUCKET
```
**1. Create Dataproc cluster with Dask RAPIDS.** Use the `gcloud` command to create a new cluster. Because of an Anaconda version conflict, script deployment on older images is slow; we recommend using Dask with Dataproc 2.0+.
```console
$ CLUSTER_NAME=<CLUSTER_NAME>
$ DASK_RUNTIME=yarn
$ gcloud dataproc clusters create $CLUSTER_NAME\
--region $REGION\
--image-version 2.0-ubuntu18\
--master-machine-type n1-standard-32\
--master-accelerator type=nvidia-tesla-t4,count=2\
--worker-machine-type n1-standard-32\
--worker-accelerator type=nvidia-tesla-t4,count=2\
--initialization-actions=gs://$GCS_BUCKET/install_gpu_driver.sh,gs://$GCS_BUCKET/dask.sh,gs://$GCS_BUCKET/rapids.sh\
--initialization-action-timeout 60m\
--optional-components=JUPYTER\
--metadata gpu-driver-provider=NVIDIA,dask-runtime=$DASK_RUNTIME,rapids-runtime=DASK\
--enable-component-gateway
```
- `GCS_BUCKET` = name of the bucket to use.
- `CLUSTER_NAME` = name of the cluster.
- `REGION` = name of the region where the cluster is to be created.
- `DASK_RUNTIME` = Dask runtime; can be set to either `yarn` or `standalone`.
**2. Run Dask RAPIDS Workload.** Once the cluster has been created, the Dask scheduler listens for workers on port `8786`, and its status dashboard is on port `8787` on the Dataproc master node.
To connect to the Dask web interface, you will need to create an SSH tunnel as described in the [Dataproc web interfaces documentation.](https://cloud.google.com/dataproc/docs/concepts/accessing/cluster-web-interfaces) You can also connect using the Dask Client Python API from a Jupyter notebook, or from a Python script or interpreter session.
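For example, from a notebook or Python session you could connect a client directly to the scheduler; a minimal sketch, assuming it runs on the Dataproc master node where the scheduler listens on port `8786`:
```python
from dask.distributed import Client

# Connect to the Dask scheduler started by the initialization action
client = Client("localhost:8786")
print(client)
```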
```{relatedexamples}
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/gcp/gke.md
|
# Google Kubernetes Engine
RAPIDS can be deployed on Google Cloud via the [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine) (GKE).
To run RAPIDS you'll need a Kubernetes cluster with GPUs available.
## Prerequisites
First you'll need to have the [`gcloud` CLI tool](https://cloud.google.com/sdk/gcloud) installed along with [`kubectl`](https://kubernetes.io/docs/tasks/tools/), [`helm`](https://helm.sh/docs/intro/install/), etc for managing Kubernetes.
Ensure you are logged into the `gcloud` CLI.
```console
$ gcloud init
```
## Create the Kubernetes cluster
Now we can launch a GPU enabled GKE cluster.
```console
$ gcloud container clusters create rapids-gpu-kubeflow \
--accelerator type=nvidia-tesla-a100,count=2 --machine-type a2-highgpu-2g \
--zone us-central1-c --release-channel stable
```
With this command, you’ve launched a GKE cluster called `rapids-gpu-kubeflow`. You’ve specified that it should use nodes of type `a2-highgpu-2g`, each with two A100 GPUs.
## Install drivers
Next, [install the NVIDIA drivers](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers) onto each node.
```console
$ kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded-latest.yaml
daemonset.apps/nvidia-driver-installer created
```
Verify that the NVIDIA drivers are successfully installed.
```console
$ kubectl get po -A --watch | grep nvidia
kube-system nvidia-driver-installer-6zwcn 1/1 Running 0 8m47s
kube-system nvidia-driver-installer-8zmmn 1/1 Running 0 8m47s
kube-system nvidia-driver-installer-mjkb8 1/1 Running 0 8m47s
kube-system nvidia-gpu-device-plugin-5ffkm 1/1 Running 0 13m
kube-system nvidia-gpu-device-plugin-d599s 1/1 Running 0 13m
kube-system nvidia-gpu-device-plugin-jrgjh 1/1 Running 0 13m
```
After your drivers are installed, you are ready to test your cluster.
```{include} ../../_includes/check-gpu-pod-works.md
```
## Install RAPIDS
Now that you have a GPU enabled Kubernetes cluster on GKE you can install RAPIDS with [any of the supported methods](../../platforms/kubernetes).
## Clean up
You can also delete the GKE cluster to stop billing with the following command.
```console
$ gcloud container clusters delete rapids-gpu-kubeflow --zone us-central1-c
Deleting cluster rapids...⠼
```
```{relatedexamples}
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/gcp/index.md
|
---
html_theme.sidebar_secondary.remove: true
---
# Google Cloud Platform
```{include} ../../_includes/menus/gcp.md
```
RAPIDS can be deployed on Google Cloud Platform in several ways. Google Cloud supports various kinds of GPU VMs for different needs. Please visit the Google Cloud documentation for [an overview of GPU VM sizes](https://cloud.google.com/compute/docs/gpus) and [GPU VM availability by region](https://cloud.google.com/compute/docs/gpus/gpu-regions-zones).
```{toctree}
---
hidden: true
---
compute-engine
vertex-ai
gke
dataproc
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/gcp/vertex-ai.md
|
# Vertex AI
RAPIDS can be deployed on [Vertex AI Workbench](https://cloud.google.com/vertex-ai-workbench).
For new, user-managed notebooks, it is recommended to use a RAPIDS docker image to access the latest RAPIDS software.
## Prepare RAPIDS Docker Image
Before configuring a new notebook, the [RAPIDS Docker image](/tools/rapids-docker) will need to be built to expose port 8080 to be used as a notebook service.
```dockerfile
FROM {{ rapids_container }}
EXPOSE 8080
ENTRYPOINT ["jupyter-lab", "--allow-root", "--ip=0.0.0.0", "--port=8080", "--no-browser", "--NotebookApp.token=''", "--NotebookApp.allow_origin='*'"]
```
Once you have built this image, it needs to be pushed to [Google Container Registry](https://cloud.google.com/container-registry/docs/pushing-and-pulling) for Vertex AI to access.
```console
$ docker build -t gcr.io/<project>/<folder>/{{ rapids_container.replace('rapidsai/', '') }} .
$ docker push gcr.io/<project>/<folder>/{{ rapids_container.replace('rapidsai/', '') }}
```
## Create a New Notebook
1. From the Google Cloud UI, navigate to [**Vertex AI**](https://console.cloud.google.com/vertex-ai) -> **Dashboard** and select **+ CREATE NOTEBOOK INSTANCE**.
2. In the **Details** section, under the **Workbench type** heading select **Managed Notebook** from the drop down menu.
3. Under the **Environment** section, select **Provide custom docker images**, and in the input field below, select the `gcr.io` path to your pushed RAPIDS Docker image.
4. Under the **Machine type** section select an NVIDIA GPU.
5. Check the **Install NVIDIA GPU Driver** option.
6. After customizing any other aspects of the machine you wish, click **CREATE**.
## Test RAPIDS
Once the managed notebook is fully configured, you can click **OPEN JUPYTERLAB** to navigate to another tab running JupyterLab.
```{warning}
You should see a popup letting you know it is loading the RAPIDS kernel; this can take a long time, so please be patient.
```
Once the kernel is loaded you can launch a notebook with the `rapids` kernel to use the latest version of RAPIDS with Vertex AI.
For example we could import and use RAPIDS libraries like `cudf`.
```ipython
In [1]: import cudf
In [2]: df = cudf.datasets.timeseries()
In [3]: df.head()
Out[3]:
id name x y
timestamp
2000-01-01 00:00:00 1020 Kevin 0.091536 0.664482
2000-01-01 00:00:01 974 Frank 0.683788 -0.467281
2000-01-01 00:00:02 1000 Charlie 0.419740 -0.796866
2000-01-01 00:00:03 1019 Edith 0.488411 0.731661
2000-01-01 00:00:04 998 Quinn 0.651381 -0.525398
```
```{relatedexamples}
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/azure/azureml.md
|
# Azure Machine Learning
RAPIDS can be deployed at scale using [Azure Machine Learning Service](https://learn.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-machine-learning) and easily scales up to any size needed.
## Prerequisites
Use an existing Azure Machine Learning workspace, or create a new one through the [Azure portal](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace?tabs=azure-portal#create-a-workspace), [Azure ML Python SDK](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace?tabs=python#create-a-workspace), [Azure CLI](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-workspace-cli?tabs=createnewresources) or [Azure Resource Manager templates](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-create-workspace-template?tabs=azcli).
Follow these high-level steps to get started:
**1. Create.** Create your Azure Resource Group.
**2. Workspace.** Within the Resource Group, create an Azure Machine Learning service Workspace.
**3. Config.** Within the Workspace, download the `config.json` file; you will load these details to initialize the workspace for running ML training jobs from within your notebook.

**4. Quota.** Check your Usage + Quota to ensure you have enough quota within your region to launch your desired cluster size.
## Azure ML Compute instance
Although it is possible to install Azure Machine Learning on your local computer, it is recommended to utilize [Azure's ML Compute instances](https://learn.microsoft.com/en-us/azure/machine-learning/concept-compute-instance), fully managed and secure development environments that can also serve as a [compute target](https://learn.microsoft.com/en-us/azure/machine-learning/concept-compute-target?view=azureml-api-2) for ML training.
The compute instance provides an integrated Jupyter notebook service, JupyterLab, Azure ML Python SDK, CLI, and other essential [tools](https://learn.microsoft.com/en-us/azure/machine-learning/concept-compute-target?view=azureml-api-2).
### Select your instance
Sign in to [Azure Machine Learning Studio](https://ml.azure.com/) and navigate to your workspace on the left-side menu.
Select **Compute** > **+ New** > choose a [RAPIDS compatible GPU](https://medium.com/dropout-analytics/which-gpus-work-with-rapids-ai-f562ef29c75f) VM size (e.g., `Standard_NC12s_v3`)

### Provision RAPIDS setup script
Create a new "startup script" via the **Advanced Settings** dropdown to install RAPIDS and dependencies. You can upload the script from your Notebooks files or local computer.
Optionally, enable SSH access to your compute (if needed).

Refer to the [Azure ML documentation](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-customize-compute-instance) for more details on how to create the setup script, but it should resemble:
```bash
#!/bin/bash
sudo -u azureuser -i <<'EOF'
conda create -y -n rapids {{ rapids_conda_channels }} {{ rapids_conda_packages }} ipykernel
conda activate rapids
# install Python SDK v2 in rapids env
python -m pip install azure-ai-ml azure-identity
# optionally install AutoGluon for AutoML GPU demo
# python -m pip install --pre autogluon
python -m ipykernel install --user --name rapids
echo "kernel install completed"
EOF
```
Launch the instance.
### Select the RAPIDS environment
Once your Notebook Instance is `Running`, open "JupyterLab" and select the `rapids` kernel when working with a new notebook.
## Azure ML Compute cluster
Launch Azure's [ML Compute cluster](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-create-attach-compute-cluster?tabs=python) to distribute your RAPIDS training jobs across a cluster of single or multi-GPU compute nodes.
The Compute cluster scales up automatically when a job is submitted, and executes in a containerized environment, packaging your model dependencies in a Docker container.
### Instantiate workspace
If using the Python SDK, connect to your workspace either by explicitly providing the workspace details or load from the `config.json` file downloaded in the pre-requisites section.
```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
# Get a handle to the workspace
ml_client = MLClient(
credential=DefaultAzureCredential(),
subscription_id="<SUBSCRIPTION_ID>",
resource_group_name="<RESOURCE_GROUP>",
workspace_name="<AML_WORKSPACE_NAME>",
)
# or load details from config file
ml_client = MLClient.from_config(
credential=DefaultAzureCredential(),
path="config.json",
)
```
### Create AMLCompute
You will need to create a [compute target](https://learn.microsoft.com/en-us/azure/machine-learning/concept-compute-target?view=azureml-api-2#azure-machine-learning-compute-managed) using Azure ML managed compute ([AmlCompute](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-ml/0.1.0b4/azure.ai.ml.entities.html)) for remote training. Note: Be sure to check limits within your available region. This [article](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas?view=azureml-api-2#azure-machine-learning-compute) includes details on the default limits and how to request more quota.
[**size**]: The VM family of the nodes.
Specify one of the **NC_v2**, **NC_v3**, **ND** or **ND_v2** GPU virtual machine families (e.g., `Standard_NC12s_v3`).
[**max_instances**]: The maximum number of nodes to autoscale up to when you run a job.
```{note}
You may choose to use low-priority VMs to run your workloads. These VMs don't have guaranteed availability but allow you to take advantage of Azure's unused capacity at a significant cost savings. The amount of available capacity can vary based on size, region, time of day, and more.
```
```python
from azure.ai.ml.entities import AmlCompute
gpu_compute = AmlCompute(
name="rapids-cluster",
type="amlcompute",
size="Standard_NC12s_v3",
max_instances=3,
idle_time_before_scale_down=300, # Seconds of idle time before scaling down
tier="low_priority", # optional
)
ml_client.begin_create_or_update(gpu_compute).result()
```
### Access Datastore URI
A [datastore URI](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-access-data-interactive?tabs=adls&view=azureml-api-2#access-data-from-a-datastore-uri-like-a-filesystem-preview) is a reference to a blob storage location (path) on your Azure account. You can copy-and-paste the datastore URI from the AzureML Studio UI:
1. Select **Data** from the left-hand menu > **Datastores** > choose your datastore name > **Browse**
2. Find the file/folder containing your dataset and click the ellipsis (...) next to it.
3. From the menu, choose **Copy URI** and select **Datastore URI** format to copy into your notebook.

### Custom RAPIDS Environment
To run an AzureML experiment, you must specify an [environment](https://learn.microsoft.com/en-us/azure/machine-learning/concept-environments?view=azureml-api-2) that contains all the necessary software dependencies to run the training script on distributed nodes. <br>
You can define an environment from a [pre-built](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-environments-v2?tabs=python&view=azureml-api-2#create-an-environment-from-a-docker-image) docker image or create-your-own from a [Dockerfile](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-environments-v2?tabs=python&view=azureml-api-2#create-an-environment-from-a-docker-build-context) or [conda](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-environments-v2?tabs=python&view=azureml-api-2#create-an-environment-from-a-conda-specification) specification file.
Create your custom RAPIDS docker image using the example below, making sure to install additional packages needed for your workflows.
```dockerfile
# Use latest rapids image with the necessary dependencies
FROM {{ rapids_container }}
# Update and/or install required packages
RUN apt-get update && \
apt-get install -y --no-install-recommends build-essential fuse && \
rm -rf /var/lib/apt/lists/*
# Activate rapids conda environment
RUN /bin/bash -c "source activate rapids && pip install azureml-mlflow"
```
Now create the Environment, making sure to label and provide a description:
```python
from azure.ai.ml.entities import Environment, BuildContext
env_docker_image = Environment(
build=BuildContext(path="Dockerfile"),
name="rapids-mlflow",
description="RAPIDS environment with azureml-mlflow",
)
ml_client.environments.create_or_update(env_docker_image)
```
### Submit RAPIDS Training jobs
Now that we have our environment and custom logic, we can configure and run the `command` [class](https://learn.microsoft.com/en-us/python/api/azure-ai-ml/azure.ai.ml?view=azure-python#azure-ai-ml-command) to submit training jobs. `inputs` is a dictionary of command-line arguments to pass to the training script.
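The snippet below defines the two values referenced in the job configuration; both are illustrative placeholders that you should replace with your own script folder and the datastore URI copied earlier:
```python
# Illustrative placeholders -- adjust to your own project layout
project_folder = "./train-rapids"  # local folder containing train_rapids.py
data_uri = "azureml://datastores/workspaceblobstore/paths/your-dataset.parquet"  # datastore URI copied from the Studio UI
```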
```python
from azure.ai.ml import command, Input
from azure.ai.ml.sweep import Choice, Uniform
command_job = command(
environment="rapids-mlflow:1", # specify version of environment to use
experiment_name="test_rapids_mlflow",
code=project_folder,
command="python train_rapids.py --data_dir ${{inputs.data_dir}} \
--n_bins ${{inputs.n_bins}} \
--cv_folds ${{inputs.cv_folds}} \
--n_estimators ${{inputs.n_estimators}} \
--max_depth ${{inputs.max_depth}} \
--max_features ${{inputs.max_features}}",
inputs={
"data_dir": Input(type="uri_file", path=data_uri),
"n_bins": 32,
"cv_folds": 5,
"n_estimators": 50,
"max_depth": 10,
"max_features": 1.0,
},
compute="rapids-cluster",
)
returned_job = ml_client.jobs.create_or_update(command_job) # submit training job
# define hyperparameter space to sweep over
command_job_for_sweep = command_job(
n_estimators=Choice(values=range(50, 500)),
max_depth=Choice(values=range(5, 19)),
max_features=Uniform(min_value=0.2, max_value=1.0),
)
# apply hyperparameter sweep_job
sweep_job = command_job_for_sweep.sweep(
compute="rapids-cluster",
sampling_algorithm="random",
primary_metric="Accuracy",
goal="Maximize",
)
returned_sweep_job = ml_client.create_or_update(sweep_job) # submit hpo job
```
### Clean up
```python
# Delete compute cluster
ml_client.compute.begin_delete(gpu_compute.name).wait()
```
```{relatedexamples}
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/azure/azure-vm-multi.md
|
# Azure VM Cluster (via Dask)
## Create a Cluster using Dask Cloud Provider
The easiest way to set up a multi-node, multi-GPU cluster on Azure is to use [Dask Cloud Provider](https://cloudprovider.dask.org/en/latest/azure.html).
### 1. Install Dask Cloud Provider
Dask Cloud Provider can be installed via `conda` or `pip`. The Azure-specific capabilities will need to be installed via the `[azure]` pip extra.
```shell
$ pip install dask-cloudprovider[azure]
```
### 2. Configure your Azure Resources
Set up your [Azure Resource Group](https://cloudprovider.dask.org/en/latest/azure.html#resource-groups), [Virtual Network](https://cloudprovider.dask.org/en/latest/azure.html#virtual-networks), and [Security Group](https://cloudprovider.dask.org/en/latest/azure.html#security-groups) according to the [Dask Cloud Provider instructions](https://cloudprovider.dask.org/en/latest/azure.html#authentication).
### 3. Create a Cluster
In a Python terminal, a cluster can be created using the `dask_cloudprovider` package. The example below creates a cluster with 2 workers in `westus2` using `Standard_NC12s_v3` VMs. The VMs should have at least 100GB of disk space in order to accommodate the RAPIDS container image and related dependencies.
```python
from dask_cloudprovider.azure import AzureVMCluster
resource_group = "<RESOURCE_GROUP>"
vnet = "<VNET>"
security_group = "<SECURITY_GROUP>"
subscription_id = "<SUBSCRIPTION_ID>"
cluster = AzureVMCluster(
resource_group=resource_group,
vnet=vnet,
security_group=security_group,
subscription_id=subscription_id,
location="westus2",
vm_size="Standard_NC12s_v3",
public_ingress=True,
disk_size=100,
n_workers=2,
worker_class="dask_cuda.CUDAWorker",
docker_image="{{rapids_container}}",
docker_args="-p 8787:8787 -p 8786:8786",
)
```
### 4. Test RAPIDS
To test RAPIDS, create a distributed client for the cluster and query for the GPU model.
```python
from dask.distributed import Client
client = Client(cluster)
def get_gpu_model():
import pynvml
pynvml.nvmlInit()
return pynvml.nvmlDeviceGetName(pynvml.nvmlDeviceGetHandleByIndex(0))
client.submit(get_gpu_model).result()
```
```shell
Out[5]: b'Tesla V100-PCIE-16GB'
```
### 5. Cleanup
Once done with the cluster, ensure the `cluster` and `client` are closed:
```python
client.close()
cluster.close()
```
```{relatedexamples}
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/azure/azure-vm.md
|
# Azure Virtual Machine
## Create Virtual Machine
Create a new [Azure Virtual Machine](https://azure.microsoft.com/en-gb/products/virtual-machines/) with GPUs, the [NVIDIA Driver](https://www.nvidia.co.uk/Download/index.aspx) and the [NVIDIA Container Runtime](https://developer.nvidia.com/nvidia-container-runtime).
NVIDIA maintains a [Virtual Machine Image (VMI) that pre-installs NVIDIA drivers and container runtimes](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nvidia.ngc_azure_17_11?tab=Overview); we recommend using this image as the starting point.
`````{tab-set}
````{tab-item} via Azure Portal
:sync: portal
1. Select the latest **NVIDIA GPU-Optimized VMI** version from the drop down list, then select **Get It Now**.
2. If you are already logged in on Azure, continue by clicking **Create**.
3. In the **Create a virtual machine** interface, fill in the required information for the VM.
Select a GPU enabled VM size.
```{dropdown} Note that not all regions support availability zones with GPU VMs.
:color: info
:icon: info
If a GPU VM size is not selectable
and shows the notice **The size is not available in zone x. No zones are supported.**, it means the GPU VM does not
support availability zones. Try other availability options.

```
Click **Review+Create** to start the virtual machine.
````
````{tab-item} via Azure CLI
:sync: cli
Prepare the following environment variables.
| Name | Description | Example |
| ------------------ | -------------------- | -------------------------------------------------------------- |
| `AZ_VMNAME` | Name for VM | `RapidsAI-V100` |
| `AZ_RESOURCEGROUP` | Resource group of VM | `rapidsai-deployment` |
| `AZ_LOCATION` | Region of VM | `westus2` |
| `AZ_IMAGE` | URN of image | `nvidia:ngc_azure_17_11:ngc-base-version-22_06_0-gen2:22.06.0` |
| `AZ_SIZE` | VM Size | `Standard_NC6s_v3` |
| `AZ_USERNAME` | User name of VM | `rapidsai` |
| `AZ_SSH_KEY` | public ssh key | `~/.ssh/id_rsa.pub` |
```bash
az vm create \
--name ${AZ_VMNAME} \
--resource-group ${AZ_RESOURCEGROUP} \
--image ${AZ_IMAGE} \
--location ${AZ_LOCATION} \
--size ${AZ_SIZE} \
--admin-username ${AZ_USERNAME} \
--ssh-key-value ${AZ_SSH_KEY}
```
```{note}
Use `az vm image list --publisher Nvidia --all --output table` to inspect URNs of official
NVIDIA images on Azure.
```
```{note}
See [this link](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/mac-create-ssh-keys)
for supported ssh keys on Azure.
```
````
`````
## Create Network Security Group
Next we need to allow network traffic to the VM so we can access Jupyter and Dask.
`````{tab-set}
````{tab-item} via Azure Portal
:sync: portal
1. Select **Networking** in the left panel.
2. Select **Add inbound port rule**.
3. Set **Destination port ranges** to `8888,8787`. Keep the rest unchanged. Select **Add**.
````
````{tab-item} via Azure CLI
:sync: cli
| Name | Description | Example |
| ---------------- | ------------------- | -------------------------- |
| `AZ_NSGNAME` | NSG name for the VM | `${AZ_VMNAME}NSG` |
| `AZ_NSGRULENAME` | Name for NSG rule | `Allow-Dask-Jupyter-ports` |
```bash
az network nsg rule create \
-g ${AZ_RESOURCEGROUP} \
--nsg-name ${AZ_NSGNAME} \
-n ${AZ_NSGRULENAME} \
--priority 1050 \
--destination-port-ranges 8888 8787
```
````
`````
## Install RAPIDS
Next, we can SSH into our VM to install RAPIDS. SSH instructions can be found by selecting **Connect** in the left panel.
```{include} ../../_includes/install-rapids-with-docker.md
```
## Test RAPIDS
```{include} ../../_includes/test-rapids-docker-vm.md
```
### Useful Links
- [Using NGC with Azure](https://docs.nvidia.com/ngc/ngc-azure-setup-guide/index.html)
```{relatedexamples}
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/azure/aks.md
|
# Azure Kubernetes Service
RAPIDS can be deployed on Azure via the [Azure Kubernetes Service](https://azure.microsoft.com/en-us/products/kubernetes-service/) (AKS).
To run RAPIDS you'll need a Kubernetes cluster with GPUs available.
## Prerequisites
First you'll need to have the [`az` CLI tool](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) installed along with [`kubectl`](https://kubernetes.io/docs/tasks/tools/), [`helm`](https://helm.sh/docs/intro/install/), etc for managing Kubernetes.
Ensure you are logged into the `az` CLI.
```console
$ az login
```
## Create the Kubernetes cluster
Now we can launch a GPU enabled AKS cluster. First launch an AKS cluster.
```console
$ az aks create -g <resource group> -n rapids \
--enable-managed-identity \
--node-count 1 \
--enable-addons monitoring \
--enable-msi-auth-for-monitoring \
--generate-ssh-keys
```
Once the cluster has been created, we need to pull the credentials into our local config.
```console
$ az aks get-credentials -g <resource group> --name rapids
Merged "rapids" as current context in ~/.kube/config
```
Next we need to add an additional node group with GPUs which you can [learn more about in the Azure docs](https://learn.microsoft.com/en-us/azure/aks/gpu-cluster).
`````{note}
You will need the `GPUDedicatedVHDPreview` feature enabled so that NVIDIA drivers are installed automatically.
You can check if this is enabled with:
````console
$ az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/GPUDedicatedVHDPreview')].{Name:name,State:properties.state}"
Name State
------------------------------------------------- -------------
Microsoft.ContainerService/GPUDedicatedVHDPreview NotRegistered
````
````{dropdown} If you see NotRegistered follow these instructions
:color: info
:icon: info
If it is not registered, you'll need to register it, which can take a few minutes.
```console
$ az feature register --name GPUDedicatedVHDPreview --namespace Microsoft.ContainerService
Once the feature 'GPUDedicatedVHDPreview' is registered, invoking 'az provider register -n Microsoft.ContainerService' is required to get the change propagated
Name
-------------------------------------------------
Microsoft.ContainerService/GPUDedicatedVHDPreview
```
Keep checking until it goes into the registered state.
```console
$ az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/GPUDedicatedVHDPreview')].{Name:name,State:properties.state}"
Name State
------------------------------------------------- -----------
Microsoft.ContainerService/GPUDedicatedVHDPreview Registered
```
When the status shows as registered, refresh the registration of the `Microsoft.ContainerService` resource provider by using the `az provider register` command:
```console
$ az provider register --namespace Microsoft.ContainerService
```
Then install the aks-preview CLI extension using the following Azure CLI command:
```console
$ az extension add --name aks-preview
```
````
`````
```console
$ az aks nodepool add \
--resource-group <resource group> \
--cluster-name rapids \
--name gpunp \
--node-count 1 \
--node-vm-size Standard_NC48ads_A100_v4 \
--aks-custom-headers UseGPUDedicatedVHD=true \
--enable-cluster-autoscaler \
--min-count 1 \
--max-count 3
```
Here we have added a new pool made up of `Standard_NC48ads_A100_v4` instances which each have two A100 GPUs. We've also enabled autoscaling between one and three nodes on the pool. Once our new pool has been created, we can test the cluster.
```{include} ../../_includes/check-gpu-pod-works.md
```
With that test passing, we know the new pool can schedule GPU pods.
## Install RAPIDS
Now that you have a GPU enabled Kubernetes cluster on AKS you can install RAPIDS with [any of the supported methods](../../platforms/kubernetes).
## Clean up
You can also delete the AKS cluster to stop billing with the following command.
```console
$ az aks delete -g <resource group> -n rapids
/ Running ..
```
```{relatedexamples}
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/azure/index.md
|
---
html_theme.sidebar_secondary.remove: true
---
# Microsoft Azure
```{include} ../../_includes/menus/azure.md
```
RAPIDS can be deployed on Microsoft Azure in several ways. Azure supports various kinds of GPU VMs for different needs.
For RAPIDS users we recommend NC/ND VMs for computation and deep learning optimized instances.
NC (>=v3) series
| Size | vCPU | Memory: GiB | Temp Storage (with NVMe) : GiB | GPU | GPU Memory: GiB | Max data disks | Max uncached disk throughput: IOPS / MBps | Max NICs/network bandwidth (MBps) |
| ------------------------ | ---- | ----------- | ------------------------------ | --- | --------------- | -------------- | ----------------------------------------- | --------------------------------- |
| Standard_NC24ads_A100_v4 | 24 | 220 | 1123 | 1 | 80 | 12 | 30000/1000 | 2/20,000 |
| Standard_NC48ads_A100_v4 | 48 | 440 | 2246 | 2 | 160 | 24 | 60000/2000 | 4/40,000 |
| Standard_NC96ads_A100_v4 | 96 | 880 | 4492 | 4 | 320 | 32 | 120000/4000 | 8/80,000 |
| Standard_NC4as_T4_v3 | 4 | 28 | 180 | 1 | 16 | 8 | | 2 / 8000 |
| Standard_NC8as_T4_v3 | 8 | 56 | 360 | 1 | 16 | 16 | | 4 / 8000 |
| Standard_NC16as_T4_v3 | 16 | 110 | 360 | 1 | 16 | 32 | | 8 / 8000 |
| Standard_NC64as_T4_v3 | 64 | 440 | 2880 | 4 | 64 | 32 | | 8 / 32000 |
| Standard_NC6s_v3 | 6 | 112 | 736 | 1 | 16 | 12 | 20000/200 | 4 |
| Standard_NC12s_v3 | 12 | 224 | 1474 | 2 | 32 | 24 | 40000/400 | 8 |
| Standard_NC24s_v3 | 24 | 448 | 2948 | 4 | 64 | 32 | 80000/800 | 8 |
| Standard_NC24rs_v3\* | 24 | 448 | 2948 | 4 | 64 | 32 | 80000/800 | 8 |
\* RDMA capable
ND (>=v2) series
| Size | vCPU | Memory: GiB | Temp Storage (with NVMe) : GiB | GPU | GPU Memory: GiB | Max data disks | Max uncached disk throughput: IOPS / MBps | Max NICs/network bandwidth (MBps) |
| ------------------------- | ---- | ----------- | ------------------------------ | ------------------------------ | --------------- | -------------- | ----------------------------------------- | --------------------------------- |
| Standard_ND96asr_v4 | 96 | 900 | 6000 | 8 A100 40 GB GPUs (NVLink 3.0) | 40 | 32 | 80,000 / 800 | 8/24,000 |
| Standard_ND96amsr_A100_v4 | 96 | 1900 | 6400 | 8 A100 80 GB GPUs (NVLink 3.0) | 80 | 32 | 80,000 / 800 | 8/24,000 |
| Standard_ND40rs_v2 | 40 | 672 | 2948 | 8 V100 32 GB (NVLink) | 32 | 32 | 80,000 / 800 | 8/24,000 |
## Useful Links
- [GPU VM availability by region](https://azure.microsoft.com/en-us/explore/global-infrastructure/products-by-region/?products=virtual-machines)
- [For GPU VM sizes overview](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes-gpu)
```{toctree}
---
hidden: true
---
azure-vm
aks
azure-vm-multi
azureml
```
| 0 |
rapidsai_public_repos/deployment/source/cloud
|
rapidsai_public_repos/deployment/source/cloud/aws/sagemaker.md
|
# SageMaker
RAPIDS can be used in a few ways with [AWS SageMaker](https://aws.amazon.com/sagemaker/).
## SageMaker Notebooks
[SageMaker Notebook Instances](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi.html) can be augmented with a RAPIDS conda environment.
We can add a RAPIDS conda environment to the set of Jupyter ipython kernels available in our SageMaker notebook instance by installing in a [lifecycle configuration script](https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html).
To get started head to SageMaker and create a [new SageMaker Notebook Instance](https://console.aws.amazon.com/sagemaker/home#/notebook-instances/create).
### Select your instance
Select a [RAPIDS compatible GPU](https://medium.com/dropout-analytics/which-gpus-work-with-rapids-ai-f562ef29c75f) (NVIDIA Pascal or greater with compute capability 6.0+) as the SageMaker Notebook instance type (e.g., `ml.p3.2xlarge`).

### Create a RAPIDS lifecycle configuration
Create a new lifecycle configuration (via the 'Additional Options' dropdown).

Give your configuration a name like `rapids` and paste the following script into the "start notebook" script.
```bash
#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
mamba create -y -n rapids {{ rapids_conda_channels }} {{ rapids_conda_packages }} ipykernel
conda activate rapids
# optionally install AutoGluon for AutoML GPU demo
# python -m pip install --pre autogluon
python -m ipykernel install --user --name rapids
echo "kernel install completed"
EOF
```
Set the volume size to at least `15GB`, to accommodate the conda environment.
Then launch the instance.
### Select the RAPIDS environment
Once your Notebook Instance is `InService` select "Open JupyterLab"
Then in Jupyter select the `rapids` kernel when working with a new notebook.

## SageMaker Estimators
RAPIDS can also be used in [SageMaker Estimators](https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html). Estimators allow you to launch training jobs on ephemeral VMs which SageMaker manages for you. The benefit of this is that your Notebook Instance doesn't need to have a GPU, so you are only charged for GPU instances for the time that your training job is running.
All you’ll need to do is bring in your RAPIDS training script and libraries as a Docker container image and ask Amazon SageMaker to run copies of it in-parallel on a specified number of GPU instances in the cloud. Let’s take a closer look at how this works through a step-by-step approach:
- Your training script should accept hyperparameters as command-line arguments. Starting with the base RAPIDS container (pulled from [Docker Hub](https://hub.docker.com/u/rapidsai)), use a `Dockerfile` to augment it by copying your training code and set the `WORKDIR` path to the code.
- Install the [sagemaker-training toolkit](https://github.com/aws/sagemaker-training-toolkit) to make the container compatible with SageMaker. Add other packages as needed for your workflow, e.g. Python, Flask (model serving), dask-ml, etc.
- Push the image to a container registry (ECR).

- Having built our container and custom logic, we can now assemble all components into an Estimator. We can now test the Estimator and run parallel hyperparameter optimization tuning jobs.
```python
# image_uri, role, instance settings, etc. are defined earlier in your workflow
estimator = sagemaker.estimator.Estimator(
    image_uri=image_uri,
    role=role,
    instance_type=instance_type,
    instance_count=instance_count,
    input_mode=input_mode,
    output_path=output_path,
    use_spot_instances=use_spot_instances,
    max_run=86400,
    sagemaker_session=sagemaker_session,
)
estimator.fit(inputs=s3_data_input, job_name=job_name)

hpo = sagemaker.tuner.HyperparameterTuner(
    estimator=estimator,
    metric_definitions=metric_definitions,
    objective_metric_name=objective_metric_name,
    objective_type="Maximize",
    hyperparameter_ranges=hyperparameter_ranges,
    strategy=strategy,
    max_jobs=max_jobs,
    max_parallel_jobs=max_parallel_jobs,
)
hpo.fit(inputs=s3_data_input, job_name=tuning_job_name, wait=True, logs="All")
```
### Upload data to S3
We offer the dataset for this demo in a public bucket hosted in either the [`us-east-1`](https://s3.console.aws.amazon.com/s3/buckets/sagemaker-rapids-hpo-us-east-1/) or [`us-west-2`](https://s3.console.aws.amazon.com/s3/buckets/sagemaker-rapids-hpo-us-west-2/) regions.
### Create Notebook Instance
Sign in to the Amazon SageMaker [console](https://console.aws.amazon.com/sagemaker/). Choose Notebook > Notebook Instances > Create notebook instance. If a field is not mentioned below, leave the default values:
- **NOTEBOOK_INSTANCE_NAME** = Name of the notebook instance
- **NOTEBOOK_INSTANCE_TYPE** = Type of notebook instance. We recommend a lightweight instance (e.g., ml.t2.medium) since the instance will only be used to build the container and launch work
- **PLATFORM_IDENTIFIER** = 'Amazon Linux 2, Jupyter Lab 3'
- **IAM_ROLE** = Create a new role > Create role
### Launch Jupyter Notebook
In a few minutes, Amazon SageMaker launches an ML compute instance. When it is ready you should see several links appear in the Actions tab of the 'Notebook Instances' section; click **Open JupyterLab** to launch into the notebook.
```{note}
If you see Pending to the right of the notebook instance in the Status column, your notebook is still being created. The status will change to InService when the notebook is ready for use.
```
### Run the Example Notebook
Once inside JupyterLab you should be able to upload the [Running RAPIDS hyperparameter experiments at scale](/examples/rapids-sagemaker-higgs/notebook) example notebook and continue following those instructions.
## Further reading
We’ve also written a **[detailed blog post](https://medium.com/rapids-ai/running-rapids-experiments-at-scale-using-amazon-sagemaker-d516420f165b)** on how to use SageMaker with RAPIDS.
```{relatedexamples}
```
rapidsai_public_repos/deployment/source/cloud/aws/ec2-multi.md
# EC2 Cluster (via Dask)
To launch a multi-node cluster on AWS EC2 we recommend you use [Dask Cloud Provider](https://cloudprovider.dask.org/en/latest/), a native cloud integration for Dask. It helps manage Dask clusters on different cloud platforms.
## Local Environment Setup
Before running these instructions, ensure you have installed RAPIDS.
```{note}
This method of deploying RAPIDS effectively allows you to burst beyond the node you are on into a cluster of EC2 VMs. This does come with the caveat that the environment you start from needs to be RAPIDS-capable, with access to NVIDIA GPUs.
```
If you are using a machine with an NVIDIA GPU then follow the [local install instructions](https://docs.rapids.ai/install). Alternatively if you do not have a GPU locally consider using a remote environment like a [SageMaker Notebook Instance](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi.html).
### Install the AWS CLI
Install the AWS CLI tools following the [official instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
### Install Dask Cloud Provider
Also install `dask-cloudprovider` and ensure you select the `aws` optional extras.
```console
$ pip install "dask-cloudprovider[aws]"
```
## Cluster setup
We'll now setup the [EC2Cluster](https://cloudprovider.dask.org/en/latest/aws.html#elastic-compute-cloud-ec2) from Dask Cloud Provider.
To do this, you'll first need to run `aws configure` and ensure the credentials are updated. [Learn more about the setup](https://cloudprovider.dask.org/en/latest/aws.html#authentication). The API also expects a security group that allows access to ports 8786-8787 and all traffic between instances in the security group. If you do not pass a group here, `dask-cloudprovider` will create one for you.
```python
from dask_cloudprovider.aws import EC2Cluster
cluster = EC2Cluster(
instance_type="g4dn.12xlarge", # 4 T4 GPUs
docker_image="{{ rapids_container }}",
worker_class="dask_cuda.CUDAWorker",
worker_options={"rmm-managed-memory": True},
security_groups=["<SECURITY GROUP ID>"],
docker_args="--shm-size=256m",
n_workers=3,
security=False,
availability_zone="us-east-1a",
region="us-east-1",
)
```
```{warning}
Instantiating this class can take upwards of 30 minutes. See the [Dask docs](https://cloudprovider.dask.org/en/latest/packer.html) on prebuilding AMIs to speed this up.
```
````{dropdown} If you have non-default credentials you may need to pass your credentials manually.
:color: info
:icon: info
Here's a small utility for parsing credential profiles.
```python
import os
import configparser
import contextlib
def get_aws_credentials(*, aws_profile="default"):
parser = configparser.RawConfigParser()
parser.read(os.path.expanduser("~/.aws/config"))
config = parser.items(
f"profile {aws_profile}" if aws_profile != "default" else "default"
)
parser.read(os.path.expanduser("~/.aws/credentials"))
credentials = parser.items(aws_profile)
all_credentials = {key.upper(): value for key, value in [*config, *credentials]}
with contextlib.suppress(KeyError):
all_credentials["AWS_REGION"] = all_credentials.pop("REGION")
return all_credentials
```
```python
cluster = EC2Cluster(..., env_vars=get_aws_credentials(aws_profile="foo"))
```
````
## Connecting a client
Once your cluster has started you can connect a Dask client to submit work.
```python
from dask.distributed import Client
client = Client(cluster)
```
```python
import cudf
import dask_cudf
df = dask_cudf.from_cudf(cudf.datasets.timeseries(), npartitions=2)
df.x.mean().compute()
```
## Clean up
When you create your cluster Dask Cloud Provider will register a finalizer to shutdown the cluster. So when your Python process exits the cluster will be cleaned up.
You can also explicitly shutdown the cluster with:
```python
client.close()
cluster.close()
```
```{relatedexamples}
```
rapidsai_public_repos/deployment/source/cloud/aws/ecs.md
# Elastic Container Service (ECS)
RAPIDS can be deployed on a multi-node ECS cluster using Dask’s dask-cloudprovider management tools. For more details, see our **[blog post on
deploying on ECS.](https://medium.com/rapids-ai/getting-started-with-rapids-on-aws-ecs-using-dask-cloud-provider-b1adfdbc9c6e)**
## Run from within AWS
The following steps assume you are running from within the same AWS VPC. One way to ensure this is to use
[AWS EC2 Single Instance](https://docs.rapids.ai/deployment/stable/cloud/aws/ec2.html) as your development environment.
### Setup AWS credentials
First, you will need AWS credentials to interact with the AWS CLI. If someone else manages your AWS account, you will need to
get these keys from them. <br />
You can provide these credentials to dask-cloudprovider in a number of ways, but the easiest is to setup your
local environment using the AWS command line tools:
```shell
$ pip install awscli
$ aws configure
```
### Install dask-cloudprovider
To install, you will need to run the following:
```shell
$ pip install dask-cloudprovider[aws]
```
## Create an ECS cluster
In the AWS console, visit the ECS dashboard and on the left-hand side, click “Clusters” then **Create Cluster**
Give the cluster a name, e.g. `rapids-cluster`.
For Networking, select the default VPC and all the subnets available in that VPC
Select "Amazon EC2 instances" for the Infrastructure type and configure your settings:
- Operating system: must be Linux-based architecture
- EC2 instance type: must support RAPIDS-compatible GPUs (Pascal or greater), e.g `p3.2xlarge`
- Desired capacity: number of maximum instances to launch (default maximum 5)
- SSH Key pair
Review your settings then click on the "Create" button and wait for the cluster creation to complete.
## Create a Dask cluster
Get the Amazon Resource Name (ARN) for the cluster you just created.
Set `AWS_REGION` environment variable to your **[default region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions)**, for instance `us-east-1`
```shell
export AWS_REGION=[REGION]
```
Create the ECSCluster object in your Python session:
```python
from dask_cloudprovider.aws import ECSCluster
cluster = ECSCluster(
    cluster_arn="<cluster arn>",
    n_workers=<num_workers>,
    worker_gpu=<num_gpus>,
    skip_cleanup=True,
    execution_role_arn="<execution_role_arn>",
    task_role_arn="<task_role_arn>",
    scheduler_timeout="20 minutes",
)
```
````{note}
When you call this command for the first time, `ECSCluster()` will automatically create a **security group** with the same name as the ECS cluster you created above.
However, if the Dask cluster creation fails or you'd like to reuse the same ECS cluster for subsequent runs of `ECSCluster()`, then you will need to provide this security group value.

```python
security_groups=["sg-0fde781be42651"]
```

````
[**cluster_arn**] = ARN of an existing ECS cluster to use for launching tasks <br />
[**num_workers**] = number of workers to start on cluster creation <br />
[**num_gpus**] = number of GPUs to expose to the worker, this must be less than or equal to the number of GPUs in the instance type you selected for the ECS cluster (e.g `1` for `p3.2xlarge`).<br />
[**skip_cleanup**] = if True, Dask workers won't be automatically terminated when cluster is shut down <br />
[**execution_role_arn**] = ARN of the IAM role that allows the Dask cluster to create and manage ECS resources <br />
[**task_role_arn**] = ARN of the IAM role that the Dask workers assume when they run <br />
[**scheduler_timeout**] = maximum time scheduler will wait for workers to connect to the cluster
## Test RAPIDS
Create a distributed client for our cluster:
```python
from dask.distributed import Client
client = Client(cluster)
```
Load sample data and test the cluster!
```python
import dask, cudf, dask_cudf
ddf = dask.datasets.timeseries()
gdf = ddf.map_partitions(cudf.from_pandas)
gdf.groupby("name").id.count().compute().head()
```
```shell
Out[34]:
Xavier 99495
Oliver 100251
Charlie 99354
Zelda 99709
Alice 100106
Name: id, dtype: int64
```
## Cleanup
You can scale down or delete the Dask cluster, but the ECS cluster will continue to run (and incur charges!) until you also scale it down or shut down altogether. <br />
If you are planning to use the ECS cluster again soon, it is probably preferable to reduce the nodes to zero.
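For example, a minimal way to do this from the same Python session (assuming the `cluster` and `client` objects created above) is:

```python
# Scale the Dask workers down to zero; the underlying ECS cluster keeps running
cluster.scale(0)

# Or shut the Dask cluster down entirely when you are done
client.close()
cluster.close()
```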
```{relatedexamples}
```
rapidsai_public_repos/deployment/source/cloud/aws/ec2.md
# Elastic Compute Cloud (EC2)
## Create Instance
Create a new [EC2 Instance](https://aws.amazon.com/ec2/) with GPUs, the [NVIDIA Driver](https://www.nvidia.co.uk/Download/index.aspx) and the [NVIDIA Container Runtime](https://developer.nvidia.com/nvidia-container-runtime).
NVIDIA maintains an [Amazon Machine Image (AMI) that pre-installs NVIDIA drivers and container runtimes](https://aws.amazon.com/marketplace/pp/prodview-7ikjtg3um26wq), we recommend using this image as the starting point.
1. Open the [**EC2 Dashboard**](https://console.aws.amazon.com/ec2/home).
1. Select **Launch Instance**.
1. In the AMI selection box search for "nvidia", then switch to the **AWS Marketplace AMIs** tab.
1. Select **NVIDIA GPU-Optimized VMI**, then select **Select** and then **Continue**.
1. In **Key pair** select your SSH keys (create these first if you haven't already).
1. Under network settings create a security group (or choose an existing) that allows SSH access on port `22` and also allow ports `8888,8786,8787` to access Jupyter and Dask.
1. Select **Launch**.
## Connect to the instance
Next we need to connect to the instance.
1. Open the [**EC2 Dashboard**](https://console.aws.amazon.com/ec2/home).
2. Locate your VM and note the **Public IP Address**.
3. In your terminal run `ssh ubuntu@<ip address>`
## Install RAPIDS
```{include} ../../_includes/install-rapids-with-docker.md
```
## Test RAPIDS
```{include} ../../_includes/test-rapids-docker-vm.md
```
```{relatedexamples}
```
rapidsai_public_repos/deployment/source/cloud/aws/eks.md
# AWS Elastic Kubernetes Service (EKS)
RAPIDS can be deployed on AWS via the [Elastic Kubernetes Service](https://aws.amazon.com/eks/) (EKS).
To run RAPIDS you'll need a Kubernetes cluster with GPUs available.
## Prerequisites
First you'll need to have the [`aws` CLI tool](https://aws.amazon.com/cli/) and [`eksctl` CLI tool](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html) installed along with [`kubectl`](https://kubernetes.io/docs/tasks/tools/), [`helm`](https://helm.sh/docs/intro/install/), etc for managing Kubernetes.
Ensure you are logged into the `aws` CLI.
```console
$ aws configure
```
## Create the Kubernetes cluster
Now we can launch a GPU enabled EKS cluster. First launch an EKS cluster with `eksctl`.
```console
$ eksctl create cluster rapids \
--version 1.24 \
--nodes 3 \
--node-type=p3.8xlarge \
--timeout=40m \
--ssh-access \
--ssh-public-key <public key ID> \ # Be sure to set your public key ID here
--region us-east-1 \
--zones=us-east-1c,us-east-1b,us-east-1d \
--auto-kubeconfig \
--install-nvidia-plugin=false
```
With this command, you’ve launched an EKS cluster called `rapids`. You’ve specified that it should use nodes of type `p3.8xlarge`. We also specified that we don't want to install the NVIDIA drivers as we will do that with the NVIDIA operator.
To access the cluster we need to pull down the credentials.
```console
$ aws eks --region us-east-1 update-kubeconfig --name rapids
```
## Install drivers
Next, [install the NVIDIA drivers](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/getting-started.html) onto each node.
```console
$ helm install --repo https://helm.ngc.nvidia.com/nvidia --wait --generate-name -n gpu-operator --create-namespace gpu-operator
NAME: gpu-operator-1670843572
NAMESPACE: gpu-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
Verify that the NVIDIA drivers are successfully installed.
```console
$ kubectl get po -A --watch | grep nvidia
kube-system nvidia-driver-installer-6zwcn 1/1 Running 0 8m47s
kube-system nvidia-driver-installer-8zmmn 1/1 Running 0 8m47s
kube-system nvidia-driver-installer-mjkb8 1/1 Running 0 8m47s
kube-system nvidia-gpu-device-plugin-5ffkm 1/1 Running 0 13m
kube-system nvidia-gpu-device-plugin-d599s 1/1 Running 0 13m
kube-system nvidia-gpu-device-plugin-jrgjh 1/1 Running 0 13m
```
After your drivers are installed, you are ready to test your cluster.
```{include} ../../_includes/check-gpu-pod-works.md
```
## Install RAPIDS
Now that you have a GPU enabled Kubernetes cluster on EKS you can install RAPIDS with [any of the supported methods](../../platforms/kubernetes).
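For example, if you plan to use the Dask operator method, it can typically be installed with Helm (check the Dask documentation for the current chart name and options):

```console
$ helm install --repo https://helm.dask.org --create-namespace -n dask-operator --generate-name dask-kubernetes-operator
```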
## Clean up
You can also delete the EKS cluster to stop billing with the following command.
```console
$ eksctl delete cluster --region=us-east-1 --name=rapids
Deleting cluster rapids...⠼
```
```{relatedexamples}
```
rapidsai_public_repos/deployment/source/cloud/aws/index.md
---
html_theme.sidebar_secondary.remove: true
---
# Amazon Web Services
```{include} ../../_includes/menus/aws.md
```
RAPIDS can be deployed on Amazon Web Services (AWS) in several ways. See the
list of accelerated instance types below:
| Cloud <br> Provider | Inst. <br> Type | Inst. <br> Name | GPU <br> Count | GPU <br> Type | xGPU <br> RAM | xGPU <br> RAM Total |
| :------------------ | --------------- | --------------- | -------------- | ------------- | ------------- | ------------------: |
| AWS | G4dn | g4dn\.xlarge | 1 | T4 | 16 (GB) | 16 (GB) |
| AWS | G4dn | g4dn\.12xlarge | 4 | T4 | 16 (GB) | 64 (GB) |
| AWS | G4dn | g4dn\.metal | 8 | T4 | 16 (GB) | 128 (GB) |
| AWS | P3 | p3\.2xlarge | 1 | V100 | 16 (GB) | 16 (GB) |
| AWS | P3 | p3\.8xlarge | 4 | V100 | 16 (GB) | 64 (GB) |
| AWS | P3 | p3\.16xlarge | 8 | V100 | 16 (GB) | 128 (GB) |
| AWS | P3 | p3dn\.24xlarge | 8 | V100 | 32 (GB) | 256 (GB) |
| AWS | P4 | p4d\.24xlarge | 8 | A100 | 40 (GB) | 320 (GB) |
| AWS | G5 | g5\.xlarge | 1 | A10G | 24 (GB) | 24 (GB) |
| AWS | G5 | g5\.2xlarge | 1 | A10G | 24 (GB) | 24 (GB) |
| AWS | G5 | g5\.4xlarge | 1 | A10G | 24 (GB) | 24 (GB) |
| AWS | G5 | g5\.8xlarge | 1 | A10G | 24 (GB) | 24 (GB) |
| AWS | G5 | g5\.16xlarge | 1 | A10G | 24 (GB) | 24 (GB) |
| AWS | G5 | g5\.12xlarge | 4 | A10G | 24 (GB) | 96 (GB) |
| AWS | G5 | g5\.24xlarge | 4 | A10G | 24 (GB) | 96 (GB) |
| AWS | G5 | g5\.48xlarge | 8 | A10G | 24 (GB) | 192 (GB) |
```{toctree}
---
hidden: true
---
ec2
ec2-multi
eks
ecs
sagemaker
```
rapidsai_public_repos/deployment/source/cloud/nvidia/bcp.md
# Base Command Platform (BCP)
[NVIDIA Base Command™ Platform (BCP)](https://www.nvidia.com/en-gb/data-center/base-command-platform/) is a software service in [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/) for enterprise-class AI training that enables businesses and their data scientists to accelerate AI development.
In addition to launching batch training jobs BCP also allows you to quickly launch RAPIDS clusters running Jupyter and Dask.
## Prerequisites
To get started sign up for a [DGX Cloud account](https://www.nvidia.com/en-us/data-center/dgx-cloud/trial/).
## Launch a cluster
Log in to [NVIDIA NGC](https://www.nvidia.com/en-gb/gpu-cloud/) and navigate to the [BCP service](https://ngc.nvidia.com/projects).
Select "Create Custom Project" (alternatively select a quick start project then select the three dots in the top-right to open a menu and select edit project).

Next give your project a title and description and choose an _Accelerated Computing Environment (ACE)_ and an _Instance Type_.

Scroll down and select your mount points:
- **Datasets** are existing data you have stored in NGC which you would like to mount to your cluster.
- **Workspaces** are persistent storage where you may wish to keep notebooks and other working data.
- **Results** is a required mount point where you will write your output results (if any).

Scroll down and select the latest RAPIDS container image.

```{note}
Optionally configure your desired number of Dask workers, session wall time and any additional packages you wish to install at runtime.
```
When you are ready hit **Create**.
## Accessing Jupyter
Once you create your cluster it will go into a _Queued_ or _Starting_ state. Behind the scenes NVIDIA DGX Cloud will provision infrastructure either in NVIDIA data centers or in the cloud depending on your ACE selection.

Once your cluster is running you will see various URLs to choose from:
- **Jupyter** launches [Jupyter Lab](https://jupyter.org/) running on the scheduler node.
- **Dask Dashboard** opens the [Dask Dashboard](https://docs.dask.org/en/stable/dashboard.html) in a new tab.
- **Scheduler URL** can be used to connect a Dask client from outside of BCP.

Let's click the **Jupyter** link and use RAPIDS from a notebook environment.

## Connecting a Dask client
The Dask scheduler is running on the same node as Jupyter so you can connect a `dask.distributed.Client` to `ws://localhost:8786`.
```{warning}
BCP configures Dask to use websockets instead of the default TCP to allow for proxying outside of NGC, so all connection URLs will start with `ws://`.
```
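For example, from a notebook in the Jupyter session you can connect with:

```python
from dask.distributed import Client

# The scheduler runs alongside Jupyter and is exposed over websockets
client = Client("ws://localhost:8786")
```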

```{note}
You can also copy/paste the Dask dashboard URL we saw earlier into the Dask Jupyter extension to view cluster metrics right within Jupyter.
```
## Testing RAPIDS
Now that we're all set up with a Jupyter session and have connected to our Dask cluster we can import `cudf` to verify we can use RAPIDS.

For more detailed examples see our related notebooks.
```{relatedexamples}
```
rapidsai_public_repos/deployment/source/cloud/nvidia/index.md
---
html_theme.sidebar_secondary.remove: true
---
# NVIDIA DGX Cloud
```{include} ../../_includes/menus/nvidia.md
```
```{toctree}
---
hidden: true
---
bcp
```
rapidsai_public_repos/deployment/source/examples/index.md
---
html_theme.sidebar_secondary.remove: true
---
# Workflow Examples
```{notebookgallerytoctree}
xgboost-gpu-hpo-job-parallel-ngc/notebook
xgboost-gpu-hpo-job-parallel-k8s/notebook
rapids-optuna-hpo/notebook
rapids-sagemaker-higgs/notebook
rapids-sagemaker-hpo/notebook
rapids-ec2-mnmg/notebook
rapids-autoscaling-multi-tenant-kubernetes/notebook
xgboost-randomforest-gpu-hpo-dask/notebook
rapids-azureml-hpo/notebook
time-series-forecasting-with-hpo/notebook
xgboost-rf-gpu-cpu-benchmark/notebook
xgboost-azure-mnmg-daskcloudprovider/notebook
```
rapidsai_public_repos/deployment/source/examples/rapids-ec2-mnmg/notebook.ipynb
from dask.distributed import Client
client = Client(cluster)
client

import math
from datetime import datetime
import cudf
import dask
import dask_cudf
import numpy as np
from cuml.dask.common import utils as dask_utils
from cuml.dask.ensemble import RandomForestRegressor
from cuml.metrics import mean_squared_error
from dask_ml.model_selection import train_test_split
from dateutil import parser
import configparser, os, contextlib

# create a list of all columns & dtypes the df must have for reading
col_dtype = {
"VendorID": "int32",
"tpep_pickup_datetime": "datetime64[ms]",
"tpep_dropoff_datetime": "datetime64[ms]",
"passenger_count": "int32",
"trip_distance": "float32",
"pickup_longitude": "float32",
"pickup_latitude": "float32",
"RatecodeID": "int32",
"store_and_fwd_flag": "int32",
"dropoff_longitude": "float32",
"dropoff_latitude": "float32",
"payment_type": "int32",
"fare_amount": "float32",
"extra": "float32",
"mta_tax": "float32",
"tip_amount": "float32",
"total_amount": "float32",
"tolls_amount": "float32",
"improvement_surcharge": "float32",
}

taxi_df = dask_cudf.read_csv(
"https://storage.googleapis.com/anaconda-public-data/nyc-taxi/csv/2016/yellow_tripdata_2016-02.csv",
dtype=col_dtype,
)

# Dictionary of required columns and their datatypes
must_haves = {
"pickup_datetime": "datetime64[ms]",
"dropoff_datetime": "datetime64[ms]",
"passenger_count": "int32",
"trip_distance": "float32",
"pickup_longitude": "float32",
"pickup_latitude": "float32",
"rate_code": "int32",
"dropoff_longitude": "float32",
"dropoff_latitude": "float32",
"fare_amount": "float32",
}

def clean(ddf, must_haves):
# replace the extraneous spaces in column names and lower the font type
tmp = {col: col.strip().lower() for col in list(ddf.columns)}
ddf = ddf.rename(columns=tmp)
ddf = ddf.rename(
columns={
"tpep_pickup_datetime": "pickup_datetime",
"tpep_dropoff_datetime": "dropoff_datetime",
"ratecodeid": "rate_code",
}
)
ddf["pickup_datetime"] = ddf["pickup_datetime"].astype("datetime64[ms]")
ddf["dropoff_datetime"] = ddf["dropoff_datetime"].astype("datetime64[ms]")
for col in ddf.columns:
if col not in must_haves:
ddf = ddf.drop(columns=col)
continue
if ddf[col].dtype == "object":
# Fixing error: could not convert arg to str
ddf = ddf.drop(columns=col)
else:
# downcast from 64bit to 32bit types
# Tesla T4 are faster on 32bit ops
if "int" in str(ddf[col].dtype):
ddf[col] = ddf[col].astype("int32")
if "float" in str(ddf[col].dtype):
ddf[col] = ddf[col].astype("float32")
ddf[col] = ddf[col].fillna(-1)
return ddf

taxi_df = taxi_df.map_partitions(clean, must_haves, meta=must_haves)

## add features
taxi_df["hour"] = taxi_df["pickup_datetime"].dt.hour.astype("int32")
taxi_df["year"] = taxi_df["pickup_datetime"].dt.year.astype("int32")
taxi_df["month"] = taxi_df["pickup_datetime"].dt.month.astype("int32")
taxi_df["day"] = taxi_df["pickup_datetime"].dt.day.astype("int32")
taxi_df["day_of_week"] = taxi_df["pickup_datetime"].dt.weekday.astype("int32")
taxi_df["is_weekend"] = (taxi_df["day_of_week"] >= 5).astype("int32")
# calculate the time difference between dropoff and pickup.
taxi_df["diff"] = taxi_df["dropoff_datetime"].astype("int32") - taxi_df[
"pickup_datetime"
].astype("int32")
taxi_df["diff"] = (taxi_df["diff"] / 1000).astype("int32")
taxi_df["pickup_latitude_r"] = taxi_df["pickup_latitude"] // 0.01 * 0.01
taxi_df["pickup_longitude_r"] = taxi_df["pickup_longitude"] // 0.01 * 0.01
taxi_df["dropoff_latitude_r"] = taxi_df["dropoff_latitude"] // 0.01 * 0.01
taxi_df["dropoff_longitude_r"] = taxi_df["dropoff_longitude"] // 0.01 * 0.01
taxi_df = taxi_df.drop("pickup_datetime", axis=1)
taxi_df = taxi_df.drop("dropoff_datetime", axis=1)
def haversine_dist(df):
import cuspatial
h_distance = cuspatial.haversine_distance(
df["pickup_longitude"],
df["pickup_latitude"],
df["dropoff_longitude"],
df["dropoff_latitude"],
)
df["h_distance"] = h_distance
df["h_distance"] = df["h_distance"].astype("float32")
return df
taxi_df = taxi_df.map_partitions(haversine_dist)

# Split into training and validation sets
X, y = taxi_df.drop(["fare_amount"], axis=1).astype("float32"), taxi_df[
"fare_amount"
].astype("float32")
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=True)

workers = client.has_what().keys()
X_train, X_test, y_train, y_test = dask_utils.persist_across_workers(
client, [X_train, X_test, y_train, y_test], workers=workers
)

# create cuml.dask RF regressor
cu_dask_rf = RandomForestRegressor(ignore_empty_partitions=True)

# fit RF model
cu_dask_rf = cu_dask_rf.fit(X_train, y_train)

# predict on validation set
y_pred = cu_dask_rf.predict(X_test)

# compute RMSE
score = mean_squared_error(y_pred.compute().to_array(), y_test.compute().to_array())
print("Workflow Complete - RMSE: ", np.sqrt(score))

# Clean up resources
client.close()
cluster.close()
rapidsai_public_repos/deployment/source/examples/xgboost-gpu-hpo-job-parallel-k8s/notebook.ipynb
# Choose the same RAPIDS image you used for launching the notebook session
rapids_image = "{{ rapids_container }}"
# Use the number of worker nodes in your Kubernetes cluster.
n_workers = 4

from dask_kubernetes.operator import KubeCluster
cluster = KubeCluster(
name="rapids-dask",
image=rapids_image,
worker_command="dask-cuda-worker",
n_workers=n_workers,
resources={"limits": {"nvidia.com/gpu": "1"}},
env={"EXTRA_PIP_PACKAGES": "optuna"},
)

cluster

from dask.distributed import Client

client = Client(cluster)

def objective(trial):
x = trial.suggest_uniform("x", -10, 10)
return (x - 2) ** 2

import optuna
from dask.distributed import wait
# Number of hyperparameter combinations to try in parallel
n_trials = 100
# Optimize in parallel on your Dask cluster
backend_storage = optuna.storages.InMemoryStorage()
dask_storage = optuna.integration.DaskStorage(storage=backend_storage, client=client)
study = optuna.create_study(direction="minimize", storage=dask_storage)
futures = []
for i in range(0, n_trials, n_workers * 4):
iter_range = (i, min([i + n_workers * 4, n_trials]))
futures.append(
{
"range": iter_range,
"futures": [
client.submit(study.optimize, objective, n_trials=1, pure=False)
for _ in range(*iter_range)
],
}
)
for partition in futures:
iter_range = partition["range"]
print(f"Testing hyperparameter combinations {iter_range[0]}..{iter_range[1]}")
_ = wait(partition["futures"])

study.best_params

study.best_value

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, KFold
import xgboost as xgb
from optuna.samplers import RandomSampler
def objective(trial):
X, y = load_breast_cancer(return_X_y=True)
params = {
"n_estimators": 10,
"verbosity": 0,
"tree_method": "gpu_hist",
# L2 regularization weight.
"lambda": trial.suggest_float("lambda", 1e-8, 100.0, log=True),
# L1 regularization weight.
"alpha": trial.suggest_float("alpha", 1e-8, 100.0, log=True),
# sampling according to each tree.
"colsample_bytree": trial.suggest_float("colsample_bytree", 0.2, 1.0),
"max_depth": trial.suggest_int("max_depth", 2, 10, step=1),
# minimum child weight, larger the term more conservative the tree.
"min_child_weight": trial.suggest_float(
"min_child_weight", 1e-8, 100, log=True
),
"learning_rate": trial.suggest_float("learning_rate", 1e-8, 1.0, log=True),
# defines how selective algorithm is.
"gamma": trial.suggest_float("gamma", 1e-8, 1.0, log=True),
"grow_policy": "depthwise",
"eval_metric": "logloss",
}
clf = xgb.XGBClassifier(**params)
fold = KFold(n_splits=5, shuffle=True, random_state=0)
score = cross_val_score(clf, X, y, cv=fold, scoring="neg_log_loss")
return score.mean()

# Number of hyperparameter combinations to try in parallel
n_trials = 250
# Optimize in parallel on your Dask cluster
backend_storage = optuna.storages.InMemoryStorage()
dask_storage = optuna.integration.DaskStorage(storage=backend_storage, client=client)
study = optuna.create_study(
direction="maximize", sampler=RandomSampler(seed=0), storage=dask_storage
)
futures = []
for i in range(0, n_trials, n_workers * 4):
iter_range = (i, min([i + n_workers * 4, n_trials]))
futures.append(
{
"range": iter_range,
"futures": [
client.submit(study.optimize, objective, n_trials=1, pure=False)
for _ in range(*iter_range)
],
}
)
for partition in futures:
iter_range = partition["range"]
print(f"Testing hyperparameter combinations {iter_range[0]}..{iter_range[1]}")
_ = wait(partition["futures"])

study.best_params

study.best_value

from optuna.visualization.matplotlib import (
plot_optimization_history,
plot_param_importances,
)

plot_optimization_history(study)

plot_param_importances(study)
rapidsai_public_repos/deployment/source/examples/xgboost-azure-mnmg-daskcloudprovider/notebook.ipynb
# # Uncomment the following and install some libraries at the beginning.
# If adlfs is not present, install adlfs to read from Azure data lake.
! pip install adlfs
! pip install "dask-cloudprovider[azure]" --upgrade

from dask.distributed import Client, wait, get_worker
from dask_cloudprovider.azure import AzureVMCluster
import dask_cudf
from dask_ml.model_selection import train_test_split
from cuml.dask.common import utils as dask_utils
from cuml.metrics import mean_squared_error
from cuml import ForestInference
import cudf
import xgboost as xgb
from datetime import datetime
from dateutil import parser
import numpy as np
from timeit import default_timer as timer
import dask
import json

location = "West US 2"
resource_group = "rapidsai-deployment"
vnet = "rapidsai-deployment-vnet"
security_group = "rapidsaiclouddeploymenttest-nsg"
vm_size = "Standard_NC12s_v3" # or choose a different GPU enabled VM type
docker_image = "{{rapids_container}}"
docker_args = "--shm-size=256m"
worker_class = "dask_cuda.CUDAWorker"

dask.config.set(
{
"logging.distributed": "info",
"cloudprovider.azure.azurevm.marketplace_plan": {
"publisher": "nvidia",
"name": "ngc-base-version-23_03_0",
"product": "ngc_azure_17_11",
"version": "23.03.0",
},
}
)
vm_image = ""
config = dask.config.get("cloudprovider.azure.azurevm", {})
config

packer_config = {
"builders": [{
"type": "azure-arm",
"use_azure_cli_auth": True,
"managed_image_resource_group_name": resource_group,
"managed_image_name": <the name of the customized VM image>,
"custom_data_file": "./configs/cloud_init.yaml.j2",
"os_type": "Linux",
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "18.04-LTS",
"azure_tags": {
"dept": "RAPIDS-CSP",
"task": "RAPIDS Custom Image deployment"
},
"build_resource_group_name": resource_group,
"vm_size": vm_size
}],
"provisioners": [{
"inline": [
"echo 'Waiting for cloud-init'; while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 1; done; echo 'Done'",
],
"type": "shell"
}]
}
with open("packer_config.json", "w") as fh:
fh.write(json.dumps(packer_config))
# # Uncomment the following line and run to create the custom image
# ! packer build packer_config.json

ManagedImageId = <value from the output above>  # or the customized VM id if you already have resource id of the customized VM from a previous run.
dask.config.set({"cloudprovider.azure.azurevm.vm_image": {}})
config = dask.config.get("cloudprovider.azure.azurevm", {})
print(config)
vm_image = {"id": ManagedImageId}
print(vm_image)

%%time
cluster = AzureVMCluster(
location=location,
resource_group=resource_group,
vnet=vnet,
security_group=security_group,
vm_image=vm_image,
vm_size=vm_size,
disk_size=200,
docker_image=docker_image,
worker_class=worker_class,
n_workers=2,
security=True,
docker_args=docker_args,
debug=False,
bootstrap=False, # This is to prevent the cloud init jinja2 script from running in the custom VM.
)

client = Client(cluster)
client

%%time
client.wait_for_workers(2)

# Uncomment if you only have the scheduler with n_workers=0 and want to scale the workers separately.
# %%time
# client.cluster.scale(n_workers)

import pprint
pp = pprint.PrettyPrinter()
pp.pprint(
client.scheduler_info()
)  # will show some information of the GPUs of the workers

from dask.distributed import PipInstall

client.register_worker_plugin(PipInstall(packages=["adlfs"]))

import math
from math import cos, sin, asin, sqrt, pi
def haversine_distance_kernel(
pickup_latitude_r,
pickup_longitude_r,
dropoff_latitude_r,
dropoff_longitude_r,
h_distance,
radius,
):
for i, (x_1, y_1, x_2, y_2) in enumerate(
zip(
pickup_latitude_r,
pickup_longitude_r,
dropoff_latitude_r,
dropoff_longitude_r,
)
):
x_1 = pi / 180 * x_1
y_1 = pi / 180 * y_1
x_2 = pi / 180 * x_2
y_2 = pi / 180 * y_2
dlon = y_2 - y_1
dlat = x_2 - x_1
a = sin(dlat / 2) ** 2 + cos(x_1) * cos(x_2) * sin(dlon / 2) ** 2
c = 2 * asin(sqrt(a))
# radius = 6371 # Radius of earth in kilometers # currently passed as input arguments
h_distance[i] = c * radius
def day_of_the_week_kernel(day, month, year, day_of_week):
for i, (d_1, m_1, y_1) in enumerate(zip(day, month, year)):
if month[i] < 3:
shift = month[i]
else:
shift = 0
Y = year[i] - (month[i] < 3)
y = Y - 2000
c = 20
d = day[i]
m = month[i] + shift + 1
day_of_week[i] = (d + math.floor(m * 2.6) + y + (y // 4) + (c // 4) - 2 * c) % 7
def add_features(df):
df["hour"] = df["tpepPickupDateTime"].dt.hour
df["year"] = df["tpepPickupDateTime"].dt.year
df["month"] = df["tpepPickupDateTime"].dt.month
df["day"] = df["tpepPickupDateTime"].dt.day
df["diff"] = (
df["tpepDropoffDateTime"] - df["tpepPickupDateTime"]
).dt.seconds # convert difference between pickup and dropoff into seconds
df["pickup_latitude_r"] = df["startLat"] // 0.01 * 0.01
df["pickup_longitude_r"] = df["startLon"] // 0.01 * 0.01
df["dropoff_latitude_r"] = df["endLat"] // 0.01 * 0.01
df["dropoff_longitude_r"] = df["endLon"] // 0.01 * 0.01
df = df.drop("tpepDropoffDateTime", axis=1)
df = df.drop("tpepPickupDateTime", axis=1)
df = df.apply_rows(
haversine_distance_kernel,
incols=[
"pickup_latitude_r",
"pickup_longitude_r",
"dropoff_latitude_r",
"dropoff_longitude_r",
],
outcols=dict(h_distance=np.float32),
kwargs=dict(radius=6371),
)
df = df.apply_rows(
day_of_the_week_kernel,
incols=["day", "month", "year"],
outcols=dict(day_of_week=np.float32),
kwargs=dict(),
)
df["is_weekend"] = df["day_of_week"] < 2
return df

def persist_train_infer_split(
client,
df,
response_dtype,
response_id,
infer_frac=1.0,
random_state=42,
shuffle=True,
):
workers = client.has_what().keys()
X, y = df.drop([response_id], axis=1), df[response_id].astype("float32")
infer_frac = max(0, min(infer_frac, 1.0))
X_train, X_infer, y_train, y_infer = train_test_split(
X, y, shuffle=True, random_state=random_state, test_size=infer_frac
)
with dask.annotate(workers=set(workers)):
X_train, y_train = client.persist(collections=[X_train, y_train])
if infer_frac != 1.0:
with dask.annotate(workers=set(workers)):
X_infer, y_infer = client.persist(collections=[X_infer, y_infer])
wait([X_train, y_train, X_infer, y_infer])
else:
X_infer = X_train
y_infer = y_train
wait([X_train, y_train])
return X_train, y_train, X_infer, y_infer
def clean(df_part, must_haves):
"""
This function performs the various clean up tasks for the data
and returns the cleaned dataframe.
"""
# iterate through columns in this df partition
for col in df_part.columns:
# drop anything not in our expected list
if col not in must_haves:
df_part = df_part.drop(col, axis=1)
continue
# fixes datetime error found by Ty Mckercher and fixed by Paul Mahler
if df_part[col].dtype == "object" and col in [
"tpepPickupDateTime",
"tpepDropoffDateTime",
]:
df_part[col] = df_part[col].astype("datetime64[ms]")
continue
# if column was read as a string, recast as float
if df_part[col].dtype == "object":
df_part[col] = df_part[col].str.fillna("-1")
df_part[col] = df_part[col].astype("float32")
else:
# downcast from 64bit to 32bit types
# Tesla T4 are faster on 32bit ops
if "int" in str(df_part[col].dtype):
df_part[col] = df_part[col].astype("int32")
if "float" in str(df_part[col].dtype):
df_part[col] = df_part[col].astype("float32")
df_part[col] = df_part[col].fillna(-1)
return df_part
def taxi_data_loader(
client,
adlsaccount,
adlspath,
response_dtype=np.float32,
infer_frac=1.0,
random_state=0,
):
# create a list of columns & dtypes the df must have
must_haves = {
"tpepPickupDateTime": "datetime64[ms]",
"tpepDropoffDateTime": "datetime64[ms]",
"passengerCount": "int32",
"tripDistance": "float32",
"startLon": "float32",
"startLat": "float32",
"rateCodeId": "int32",
"endLon": "float32",
"endLat": "float32",
"fareAmount": "float32",
}
workers = client.has_what().keys()
response_id = "fareAmount"
storage_options = {"account_name": adlsaccount}
taxi_data = dask_cudf.read_parquet(
adlspath,
storage_options=storage_options,
chunksize=25e6,
npartitions=len(workers),
)
taxi_data = clean(taxi_data, must_haves)
taxi_data = taxi_data.map_partitions(add_features)
# Drop NaN values and convert to float32
taxi_data = taxi_data.dropna()
fields = [
"passengerCount",
"tripDistance",
"startLon",
"startLat",
"rateCodeId",
"endLon",
"endLat",
"fareAmount",
"diff",
"h_distance",
"day_of_week",
"is_weekend",
]
taxi_data = taxi_data.astype("float32")
taxi_data = taxi_data[fields]
taxi_data = taxi_data.reset_index()
return persist_train_infer_split(
client, taxi_data, response_dtype, response_id, infer_frac, random_state
)

tic = timer()
X_train, y_train, X_infer, y_infer = taxi_data_loader(
client,
adlsaccount="azureopendatastorage",
adlspath="az://nyctlc/yellow/puYear=2014/puMonth=1*/*.parquet",
infer_frac=0.1,
random_state=42,
)
toc = timer()
print(f"Wall clock time taken for ETL and persisting : {toc-tic} s")X_train.shape[0].compute()X_train.head()X_inferparams = {
"learning_rate": 0.15,
"max_depth": 8,
"objective": "reg:squarederror",
"subsample": 0.7,
"colsample_bytree": 0.7,
"min_child_weight": 1,
"gamma": 1,
"silent": True,
"verbose_eval": True,
"booster": "gbtree", # 'gblinear' not implemented in dask
"debug_synchronize": True,
"eval_metric": "rmse",
"tree_method": "gpu_hist",
"num_boost_rounds": 100,
}

data_train = xgb.dask.DaskDMatrix(client, X_train, y_train)
tic = timer()
xgboost_output = xgb.dask.train(
client, params, data_train, num_boost_round=params["num_boost_rounds"]
)
xgb_gpu_model = xgboost_output["booster"]
toc = timer()
print(f"Wall clock time taken for this cell : {toc-tic} s")xgb_gpu_modelmodel_filename = "trained-model_nyctaxi.xgb"
xgb_gpu_model.save_model(model_filename)_y_test = y_infer.compute()
wait(_y_test)d_test = xgb.dask.DaskDMatrix(client, X_infer)
tic = timer()
y_pred = xgb.dask.predict(client, xgb_gpu_model, d_test)
y_pred = y_pred.compute()
wait(y_pred)
toc = timer()
print(f"Wall clock time taken for xgb.dask.predict : {toc-tic} s")tic = timer()
y_pred = xgb.dask.inplace_predict(client, xgb_gpu_model, X_infer)
y_pred = y_pred.compute()
wait(y_pred)
toc = timer()
print(f"Wall clock time taken for inplace inference : {toc-tic} s")tic = timer()
print("Calculating MSE")
score = mean_squared_error(y_pred, _y_test)
print("Workflow Complete - RMSE: ", np.sqrt(score))
toc = timer()
print(f"Wall clock time taken for this cell : {toc-tic} s")# the code below will read the locally saved xgboost model
# in binary format and write a copy of it to all dask workers
def read_model(path):
"""Read model file into memory."""
with open(path, "rb") as fh:
return fh.read()
def write_model(path, data):
"""Write model file to disk."""
with open(path, "wb") as fh:
fh.write(data)
return path
model_data = read_model("trained-model_nyctaxi.xgb")
# Tell all the workers to write the model to disk
client.run(write_model, "/tmp/model.dat", model_data)
# this code reads the binary file in worker directory
# and loads the model via FIL for prediction
def predict_model(input_df):
import xgboost as xgb
from cuml import ForestInference
# load xgboost model using FIL and make prediction
fm = ForestInference.load("/tmp/model.dat", model_type="xgboost")
print(fm)
pred = fm.predict(input_df)
return pred

tic = timer()
predictions = X_infer.map_partitions(
predict_model, meta="float"
) # this is like MPI reduce
y_pred = predictions.compute()
wait(y_pred)
toc = timer()
print(f"Wall clock time taken for this cell : {toc-tic} s")rows_csv = X_infer.iloc[:, 0].shape[0].compute()
print(
f"It took {toc-tic} seconds to predict on {rows_csv} rows using FIL distributedly on each worker"
)

tic = timer()
score = mean_squared_error(y_pred, _y_test)
toc = timer()
print("Final - RMSE: ", np.sqrt(score))client.close()
cluster.close()
rapidsai_public_repos/deployment/source/examples/xgboost-azure-mnmg-daskcloudprovider/configs/cloud_init.yaml.j2
#cloud-config
# Bootstrap
packages:
- apt-transport-https
- ca-certificates
- curl
- gnupg-agent
- software-properties-common
- ubuntu-drivers-common
# Enable ipv4 forwarding, required on CIS hardened machines
write_files:
- path: /etc/sysctl.d/enabled_ipv4_forwarding.conf
content: |
net.ipv4.conf.all.forwarding=1
# create the docker group
groups:
- docker
# Add default auto created user to docker group
system_info:
default_user:
groups: [docker]
runcmd:
# Install Docker
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
- add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
- apt-get update -y
- apt-get install -y docker-ce docker-ce-cli containerd.io
- systemctl start docker
- systemctl enable docker
# Install NVIDIA driver
- DEBIAN_FRONTEND=noninteractive ubuntu-drivers install
# Install NVIDIA docker
- curl -fsSL https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
- curl -s -L https://nvidia.github.io/nvidia-docker/$(. /etc/os-release;echo $ID$VERSION_ID)/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
- apt-get update -y
- apt-get install -y nvidia-docker2
- systemctl restart docker
# Attempt to run a RAPIDS container to download the container layers and decompress them
- 'docker run --net=host --gpus=all --shm-size=256m rapidsai/rapidsai:cuda11.2-runtime-ubuntu18.04-py3.8 dask-scheduler --version'
rapidsai_public_repos/deployment/source/examples/rapids-autoscaling-multi-tenant-kubernetes/rapids-notebook.yaml
# rapids-notebook.yaml (extended)
apiVersion: v1
kind: ServiceAccount
metadata:
name: rapids-dask
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: rapids-dask
rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods", "services"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list"]
- apiGroups: [kubernetes.dask.org]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: rapids-dask
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: rapids-dask
subjects:
- kind: ServiceAccount
name: rapids-dask
---
apiVersion: v1
kind: ConfigMap
metadata:
name: jupyter-server-proxy-config
data:
jupyter_server_config.py: |
c.ServerProxy.host_allowlist = lambda app, host: True
---
apiVersion: v1
kind: Service
metadata:
name: rapids-notebook
labels:
app: rapids-notebook
spec:
type: ClusterIP
ports:
- port: 8888
name: http
targetPort: notebook
selector:
app: rapids-notebook
---
apiVersion: v1
kind: Pod
metadata:
name: rapids-notebook
labels:
app: rapids-notebook
spec:
serviceAccountName: rapids-dask
securityContext:
fsGroup: 0
containers:
- name: rapids-notebook
image: us-central1-docker.pkg.dev/nv-ai-infra/rapidsai/rapidsai/base:23.08-cuda12.0-py3.10
resources:
limits:
nvidia.com/gpu: 1
ports:
- containerPort: 8888
name: notebook
env:
- name: DASK_DISTRIBUTED__DASHBOARD__LINK
value: "/proxy/{host}:{port}/status"
volumeMounts:
- name: jupyter-server-proxy-config
mountPath: /root/.jupyter/jupyter_server_config.py
subPath: jupyter_server_config.py
volumes:
- name: jupyter-server-proxy-config
configMap:
name: jupyter-server-proxy-config
rapidsai_public_repos/deployment/source/examples/rapids-autoscaling-multi-tenant-kubernetes/notebook.ipynb
from dask_kubernetes.operator import KubeCluster
cluster = KubeCluster(
name="rapids-dask-1",
image="rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10", # Replace me with your cached image
n_workers=4,
resources={"limits": {"nvidia.com/gpu": "1"}},
env={"EXTRA_PIP_PACKAGES": "gcsfs"},
worker_command="dask-cuda-worker",
)

from dask.distributed import Client, wait
client = Client(cluster)
client

%%time
import dask.config
import dask.dataframe as dd
dask.config.set({"dataframe.backend": "cudf"})
df = dd.read_parquet(
"gcs://anaconda-public-data/nyc-taxi/2015.parquet",
storage_options={"token": "cloud"},
).persist()
wait(df)
df

from cuspatial import haversine_distance
def map_haversine(part):
return haversine_distance(
part["pickup_longitude"],
part["pickup_latitude"],
part["dropoff_longitude"],
part["dropoff_latitude"],
)
df["haversine_distance"] = df.map_partitions(map_haversine)%%time
df["haversine_distance"].compute()client.close()
cluster.close()import dask.delayed
@dask.delayed
def run_haversine(*args):
from dask_kubernetes.operator import KubeCluster
from dask.distributed import Client, wait
import uuid
import dask.config
import dask.dataframe as dd
dask.config.set({"dataframe.backend": "cudf"})
def map_haversine(part):
from cuspatial import haversine_distance
return haversine_distance(
part["pickup_longitude"],
part["pickup_latitude"],
part["dropoff_longitude"],
part["dropoff_latitude"],
)
with KubeCluster(
name="rapids-dask-" + uuid.uuid4().hex[:5],
image="rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10", # Replace me with your cached image
n_workers=2,
resources={"limits": {"nvidia.com/gpu": "1"}},
env={"EXTRA_PIP_PACKAGES": "gcsfs"},
worker_command="dask-cuda-worker",
resource_timeout=600,
) as cluster:
with Client(cluster) as client:
client.wait_for_workers(2)
df = dd.read_parquet(
"gcs://anaconda-public-data/nyc-taxi/2015.parquet",
storage_options={"token": "cloud"},
)
client.compute(df.map_partitions(map_haversine))

%%time
run_haversine().compute()

from dask_kubernetes.operator import KubeCluster, make_cluster_spec
cluster_spec = make_cluster_spec(
name="mock-jupyter-cluster",
image="rapidsai/rapidsai-core:23.02-cuda11.8-runtime-ubuntu22.04-py3.10", # Replace me with your cached image
n_workers=1,
resources={"limits": {"nvidia.com/gpu": "1"}, "requests": {"cpu": "50m"}},
env={"EXTRA_PIP_PACKAGES": "gcsfs dask-kubernetes"},
)
cluster_spec["spec"]["worker"]["spec"]["serviceAccountName"] = "rapids-dask"
cluster = KubeCluster(custom_cluster_spec=cluster_spec)
cluster

client = Client(cluster)
client

%%time
run_haversine().compute()

from random import randrange
def generate_workload(
stages=3, min_width=1, max_width=3, variation=1, input_workload=None
):
graph = [input_workload] if input_workload is not None else [run_haversine()]
last_width = min_width
for stage in range(stages):
width = randrange(
max(min_width, last_width - variation),
min(max_width, last_width + variation) + 1,
)
graph = [run_haversine(*graph) for _ in range(width)]
last_width = width
return run_haversine(*graph)

cluster.scale(3)  # Let's also bump up our user cluster to show more users logging in.

workload = generate_workload(stages=2, max_width=2)
workload.visualize()

import datetime

%%time
start_time = (datetime.datetime.now() - datetime.timedelta(minutes=15)).strftime(
"%Y-%m-%dT%H:%M:%SZ"
)
try:
# Start with a couple of concurrent workloads
workload = generate_workload(stages=10, max_width=2)
# Then increase demand as more users appear
workload = generate_workload(
stages=5, max_width=5, min_width=3, variation=5, input_workload=workload
)
# Now reduce the workload for a longer period of time, this could be over a lunchbreak or something
workload = generate_workload(stages=30, max_width=2, input_workload=workload)
# Everyone is back from lunch and it hitting the cluster hard
workload = generate_workload(
stages=10, max_width=10, min_width=3, variation=5, input_workload=workload
)
# The after lunch rush is easing
workload = generate_workload(
stages=5, max_width=5, min_width=3, variation=5, input_workload=workload
)
# As we get towards the end of the day demand slows off again
workload = generate_workload(stages=10, max_width=2, input_workload=workload)
workload.compute()
finally:
client.close()
cluster.close()
end_time = (datetime.datetime.now() + datetime.timedelta(minutes=15)).strftime(
"%Y-%m-%dT%H:%M:%SZ"
)

from prometheus_pandas import query

p = query.Prometheus("http://kube-prometheus-stack-prometheus.prometheus:9090")

pending_pods = p.query_range(
'kube_pod_status_phase{phase="Pending",namespace="default"}',
start_time,
end_time,
"1s",
).sum()

from dask.utils import format_time

format_time(pending_pods.median())

format_time(pending_pods.mean())

format_time(pending_pods.quantile(0.99))

from scipy import stats

stats.percentileofscore(pending_pods, 2.01)

stats.percentileofscore(pending_pods, 5.01)

stats.percentileofscore(pending_pods, 60.01)

ax = pending_pods.hist(bins=range(0, 600, 30))
ax.set_title("Dask Worker Pod wait times")
ax.set_xlabel("Seconds")
ax.set_ylabel("Pods")ax = pending_pods.hist(bins=range(0, 60, 2))
ax.set_title("Dask Worker Pod wait times (First minute)")
ax.set_xlabel("Seconds")
ax.set_ylabel("Pods")running_pods = p.query_range(
'kube_pod_status_phase{phase=~"Running|ContainerCreating",namespace="default"}',
start_time,
end_time,
"1s",
)
running_pods = running_pods[
running_pods.columns.drop(list(running_pods.filter(regex="prepull")))
]
nodes = p.query_range("count(kube_node_info)", start_time, end_time, "1s")
nodes.columns = ["Available GPUs"]
nodes["Available GPUs"] = (
nodes["Available GPUs"] * 2
) # We know our nodes each had 2 GPUs
nodes["Utilized GPUs"] = running_pods.sum(axis=1)nodes.plot()gpu_hours_utilized = nodes["Utilized GPUs"].sum() / 60 / 60
gpu_hours_utilizedgpu_hours_cost = nodes["Available GPUs"].sum() / 60 / 60
gpu_hours_costoverhead = (1 - (gpu_hours_utilized / gpu_hours_cost)) * 100
str(int(overhead)) + "% overhead"
rapidsai_public_repos/deployment/source/examples/rapids-autoscaling-multi-tenant-kubernetes/image-prepuller.yaml
# image-prepuller.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: prepull-rapids
spec:
selector:
matchLabels:
name: prepull-rapids
template:
metadata:
labels:
name: prepull-rapids
spec:
initContainers:
- name: prepull-rapids
image: us-central1-docker.pkg.dev/nv-ai-infra/rapidsai/rapidsai/base:23.08-cuda12.0-py3.10
command: ["sh", "-c", "'true'"]
containers:
- name: pause
image: gcr.io/google_containers/pause
rapidsai_public_repos/deployment/source/examples/rapids-autoscaling-multi-tenant-kubernetes/prometheus-stack-values.yaml
# prometheus-stack-values.yaml
serviceMonitorSelectorNilUsesHelmValues: false
prometheus:
prometheusSpec:
# Setting this to a high frequency so that we have richer data for analysis later
scrapeInterval: 1s
rapidsai_public_repos/deployment/source/examples/rapids-optuna-hpo/notebook.ipynb
## Run this cell to install optuna
# !pip install optuna

import cudf
import cuml
import dask_cudf
import numpy as np
import optuna
import os
import dask
from cuml import LogisticRegression
from cuml.model_selection import train_test_split
from cuml.metrics import log_loss
from dask_cuda import LocalCUDACluster
from dask.distributed import Client, wait, performance_report

# This will use all GPUs on the local host by default
cluster = LocalCUDACluster(threads_per_worker=1, ip="", dashboard_address="8081")
c = Client(cluster)
# Query the client for all connected workers
workers = c.has_what().keys()
n_workers = len(workers)
c

import os
file_name = "train.csv"
data_dir = "data/"
INPUT_FILE = os.path.join(data_dir, file_name)

import pandas as pd
N_TRIALS = 150
df = cudf.read_csv(INPUT_FILE)
# Drop ID column
df = df.drop("ID", axis=1)
# Drop non-numerical data and fill NaNs before passing to cuML RF
CAT_COLS = list(df.select_dtypes("object").columns)
df = df.drop(CAT_COLS, axis=1)
df = df.fillna(0)
df = df.astype("float32")
X, y = df.drop(["target"], axis=1), df["target"].astype("int32")
study_name = "dask_optuna_lr_log_loss_tpe"def train_and_eval(
X_param, y_param, penalty="l2", C=1.0, l1_ratio=None, fit_intercept=True
):
"""
Splits the given data into train and test split to train and evaluate the model
for the params parameters.
Params
______
X_param: DataFrame.
The data to use for training and testing.
y_param: Series.
The label for training
penalty, C, l1_ratio, fit_intercept: The parameter values for Logistic Regression.
Returns
score: log loss of the fitted model
"""
X_train, X_valid, y_train, y_valid = train_test_split(
X_param, y_param, random_state=42
)
classifier = LogisticRegression(
penalty=penalty,
C=C,
l1_ratio=l1_ratio,
fit_intercept=fit_intercept,
max_iter=10000,
)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_valid)
score = log_loss(y_valid, y_pred)
return score

print("Score with default parameters : ", train_and_eval(X, y))

def objective(trial, X_param, y_param):
C = trial.suggest_float("C", 0.01, 100.0, log=True)
penalty = trial.suggest_categorical("penalty", ["none", "l1", "l2"])
fit_intercept = trial.suggest_categorical("fit_intercept", [True, False])
score = train_and_eval(
X_param, y_param, penalty=penalty, C=C, fit_intercept=fit_intercept
)
return score

storage = optuna.integration.DaskStorage()
study = optuna.create_study(
sampler=optuna.samplers.TPESampler(seed=142),
study_name=study_name,
direction="minimize",
storage=storage,
)
# Optimize in parallel on your Dask cluster
#
# Submit `n_workers` optimization tasks, where each task runs about 40 optimization trials
# for a total of about N_TRIALS trials in all
futures = [
c.submit(
study.optimize,
lambda trial: objective(trial, X, y),
n_trials=N_TRIALS // n_workers,
pure=False,
)
for _ in range(n_workers)
]
wait(futures)
print(f"Best params: {study.best_params}")
print("Number of finished trials: ", len(study.trials))
rapidsai_public_repos/deployment/source/examples/xgboost-gpu-hpo-job-parallel-ngc/notebook.ipynb
from dask.distributed import Client
client = Client("ws://localhost:8786")
client

n_workers = len(client.scheduler_info()["workers"])

def objective(trial):
x = trial.suggest_uniform("x", -10, 10)
return (x - 2) ** 2

import optuna
from dask.distributed import wait
# Number of hyperparameter combinations to try in parallel
n_trials = 100
# Optimize in parallel on your Dask cluster
backend_storage = optuna.storages.InMemoryStorage()
dask_storage = optuna.integration.DaskStorage(storage=backend_storage, client=client)
study = optuna.create_study(direction="minimize", storage=dask_storage)
futures = []
for i in range(0, n_trials, n_workers * 4):
iter_range = (i, min([i + n_workers * 4, n_trials]))
futures.append(
{
"range": iter_range,
"futures": [
client.submit(study.optimize, objective, n_trials=1, pure=False)
for _ in range(*iter_range)
],
}
)
for partition in futures:
iter_range = partition["range"]
print(f"Testing hyperparameter combinations {iter_range[0]}..{iter_range[1]}")
_ = wait(partition["futures"])

study.best_params

study.best_value

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, KFold
import xgboost as xgb
from optuna.samplers import RandomSampler
def objective(trial):
X, y = load_breast_cancer(return_X_y=True)
params = {
"n_estimators": 10,
"verbosity": 0,
"tree_method": "gpu_hist",
# L2 regularization weight.
"lambda": trial.suggest_float("lambda", 1e-8, 100.0, log=True),
# L1 regularization weight.
"alpha": trial.suggest_float("alpha", 1e-8, 100.0, log=True),
# sampling according to each tree.
"colsample_bytree": trial.suggest_float("colsample_bytree", 0.2, 1.0),
"max_depth": trial.suggest_int("max_depth", 2, 10, step=1),
# minimum child weight, larger the term more conservative the tree.
"min_child_weight": trial.suggest_float(
"min_child_weight", 1e-8, 100, log=True
),
"learning_rate": trial.suggest_float("learning_rate", 1e-8, 1.0, log=True),
# defines how selective algorithm is.
"gamma": trial.suggest_float("gamma", 1e-8, 1.0, log=True),
"grow_policy": "depthwise",
"eval_metric": "logloss",
}
clf = xgb.XGBClassifier(**params)
fold = KFold(n_splits=5, shuffle=True, random_state=0)
score = cross_val_score(clf, X, y, cv=fold, scoring="neg_log_loss")
return score.mean()

# Number of hyperparameter combinations to try in parallel
n_trials = 1000
# Optimize in parallel on your Dask cluster
backend_storage = optuna.storages.InMemoryStorage()
dask_storage = optuna.integration.DaskStorage(storage=backend_storage, client=client)
study = optuna.create_study(
direction="maximize", sampler=RandomSampler(seed=0), storage=dask_storage
)
futures = []
for i in range(0, n_trials, n_workers * 4):
iter_range = (i, min([i + n_workers * 4, n_trials]))
futures.append(
{
"range": iter_range,
"futures": [
client.submit(study.optimize, objective, n_trials=1, pure=False)
for _ in range(*iter_range)
],
}
)
for partition in futures:
iter_range = partition["range"]
print(f"Testing hyperparameter combinations {iter_range[0]}..{iter_range[1]}")
    _ = wait(partition["futures"])
study.best_params
study.best_value
from optuna.visualization.matplotlib import (
    plot_optimization_history,
    plot_param_importances,
)
plot_optimization_history(study)
plot_param_importances(study)
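# Hedged aside (not part of the original notebook): DaskStorage wraps whatever Optuna
# storage backend it is given, so if the study needs to outlive the notebook session the
# in-memory backend could be swapped for a SQLite-backed RDBStorage (requires SQLAlchemy).
# The database filename below is an assumption for illustration only.
persistent_backend = optuna.storages.RDBStorage("sqlite:///rapids_hpo_study.db")
persistent_dask_storage = optuna.integration.DaskStorage(
    storage=persistent_backend, client=client
)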
rapidsai_public_repos/deployment/source/examples/xgboost-randomforest-gpu-hpo-dask/notebook.ipynb
import warnings
warnings.filterwarnings("ignore")  # Reduce number of messages/warnings displayed
import time
import cudf
import cuml
import numpy as np
import pandas as pd
import xgboost as xgb
import dask_ml.model_selection as dcv
from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
from sklearn import datasets
from sklearn.metrics import make_scorer
from cuml.ensemble import RandomForestClassifier
from cuml.model_selection import train_test_split
from cuml.metrics.accuracy import accuracy_score
import os
from urllib.request import urlretrieve
import gzip
cluster = LocalCUDACluster()
client = Client(cluster)
client
data_dir = "./rapids_hpo/data/"
file_name = "airlines.parquet"
parquet_name = os.path.join(data_dir, file_name)
parquet_name
def prepare_dataset(use_full_dataset=False):
global file_path, data_dir
if use_full_dataset:
url = "https://data.rapids.ai/cloud-ml/airline_20000000.parquet"
else:
url = "https://data.rapids.ai/cloud-ml/airline_small.parquet"
if os.path.isfile(parquet_name):
print(f" > File already exists. Ready to load at {parquet_name}")
else:
# Ensure folder exists
os.makedirs(data_dir, exist_ok=True)
def data_progress_hook(block_number, read_size, total_filesize):
if (block_number % 1000) == 0:
print(
f" > percent complete: { 100 * ( block_number * read_size ) / total_filesize:.2f}\r",
end="",
)
return
urlretrieve(
url=url,
filename=parquet_name,
reporthook=data_progress_hook,
)
print(f" > Download complete {file_name}")
input_cols = [
"Year",
"Month",
"DayofMonth",
"DayofWeek",
"CRSDepTime",
"CRSArrTime",
"UniqueCarrier",
"FlightNum",
"ActualElapsedTime",
"Origin",
"Dest",
"Distance",
"Diverted",
]
dataset = cudf.read_parquet(parquet_name)
# encode categoricals as numeric
for col in dataset.select_dtypes(["object"]).columns:
dataset[col] = dataset[col].astype("category").cat.codes.astype(np.int32)
# cast all columns to int32
for col in dataset.columns:
dataset[col] = dataset[col].astype(np.float32) # needed for random forest
# put target/label column first [ classic XGBoost standard ]
output_cols = ["ArrDelayBinary"] + input_cols
dataset = dataset.reindex(columns=output_cols)
    return dataset
df = prepare_dataset()
import time
from contextlib import contextmanager
# Helping time blocks of code
@contextmanager
def timed(txt):
t0 = time.time()
yield
t1 = time.time()
print("%32s time: %8.5f" % (txt, t1 - t0))# Define some default values to make use of across the notebook for a fair comparison
N_FOLDS = 5
N_ITER = 25label = "ArrDelayBinary"X_train, X_test, y_train, y_test = train_test_split(df, label, test_size=0.2)X_cpu = X_train.to_pandas()
y_cpu = y_train.to_numpy()
X_test_cpu = X_test.to_pandas()
y_test_cpu = y_test.to_numpy()
def accuracy_score_wrapper(y, y_hat):
"""
A wrapper function to convert labels to float32,
and pass it to accuracy_score.
Params:
- y: The y labels that need to be converted
- y_hat: The predictions made by the model
"""
y = y.astype("float32") # cuML RandomForest needs the y labels to be float32
return accuracy_score(y, y_hat, convert_dtype=True)
accuracy_wrapper_scorer = make_scorer(accuracy_score_wrapper)
cuml_accuracy_scorer = make_scorer(accuracy_score, convert_dtype=True)
def do_HPO(model, gridsearch_params, scorer, X, y, mode="gpu-Grid", n_iter=10):
"""
Perform HPO based on the mode specified
mode: default gpu-Grid. The possible options are:
1. gpu-grid: Perform GPU based GridSearchCV
2. gpu-random: Perform GPU based RandomizedSearchCV
n_iter: specified with Random option for number of parameter settings sampled
Returns the best estimator and the results of the search
"""
if mode == "gpu-grid":
print("gpu-grid selected")
clf = dcv.GridSearchCV(model, gridsearch_params, cv=N_FOLDS, scoring=scorer)
elif mode == "gpu-random":
print("gpu-random selected")
clf = dcv.RandomizedSearchCV(
model, gridsearch_params, cv=N_FOLDS, scoring=scorer, n_iter=n_iter
)
else:
print("Unknown Option, please choose one of [gpu-grid, gpu-random]")
return None, None
res = clf.fit(X, y)
print(
"Best clf and score {} {}\n---\n".format(res.best_estimator_, res.best_score_)
)
    return res.best_estimator_, res
def print_acc(model, X_train, y_train, X_test, y_test, mode_str="Default"):
"""
Trains a model on the train data provided, and prints the accuracy of the trained model.
    mode_str: label identifying which model is being evaluated in the printed output
"""
y_pred = model.fit(X_train, y_train).predict(X_test)
score = accuracy_score(y_pred, y_test.astype("float32"), convert_dtype=True)
print("{} model accuracy: {}".format(mode_str, score))X_train.shapemodel_gpu_xgb_ = xgb.XGBClassifier(tree_method="gpu_hist")
print_acc(model_gpu_xgb_, X_train, y_cpu, X_test, y_test_cpu)# For xgb_model
model_gpu_xgb = xgb.XGBClassifier(tree_method="gpu_hist")
# More range
params_xgb = {
"max_depth": np.arange(start=3, stop=12, step=3), # Default = 6
"alpha": np.logspace(-3, -1, 5), # default = 0
"learning_rate": [0.05, 0.1, 0.15], # default = 0.3
"min_child_weight": np.arange(start=2, stop=10, step=3), # default = 1
"n_estimators": [100, 200, 1000],
}mode = "gpu-random"
with timed("XGB-" + mode):
res, results = do_HPO(
model_gpu_xgb,
params_xgb,
cuml_accuracy_scorer,
X_train,
y_cpu,
mode=mode,
n_iter=N_ITER,
)
print("Searched over {} parameters".format(len(results.cv_results_["mean_test_score"])))print_acc(res, X_train, y_cpu, X_test, y_test_cpu, mode_str=mode)mode = "gpu-grid"
with timed("XGB-" + mode):
res, results = do_HPO(
model_gpu_xgb, params_xgb, cuml_accuracy_scorer, X_train, y_cpu, mode=mode
)
print("Searched over {} parameters".format(len(results.cv_results_["mean_test_score"])))print_acc(res, X_train, y_cpu, X_test, y_test_cpu, mode_str=mode)from cuml.experimental.hyperopt_utils import plotting_utilsplotting_utils.plot_search_results(results)df_gridsearch = pd.DataFrame(results.cv_results_)
plotting_utils.plot_heatmap(df_gridsearch, "param_max_depth", "param_n_estimators")## Random Forest
model_rf_ = RandomForestClassifier()
params_rf = {
"max_depth": np.arange(start=3, stop=15, step=2), # Default = 6
"max_features": [0.1, 0.50, 0.75, "auto"], # default = 0.3
"n_estimators": [100, 200, 500, 1000],
}
for col in X_train.columns:
X_train[col] = X_train[col].astype("float32")
    y_train = y_train.astype("int32")
print(
"Default acc: ",
accuracy_score(model_rf_.fit(X_train, y_train).predict(X_test), y_test),
)mode = "gpu-random"
model_rf = RandomForestClassifier()
with timed("RF-" + mode):
res, results = do_HPO(
model_rf,
params_rf,
cuml_accuracy_scorer,
X_train,
y_cpu,
mode=mode,
n_iter=N_ITER,
)
print("Searched over {} parameters".format(len(results.cv_results_["mean_test_score"])))print("Improved acc: ", accuracy_score(res.predict(X_test), y_test))df_gridsearch = pd.DataFrame(results.cv_results_)
plotting_utils.plot_heatmap(df_gridsearch, "param_max_depth", "param_n_estimators")
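# Hedged addition (not part of the original notebook): once the searches are finished,
# shutting down the LocalCUDACluster explicitly releases the GPUs and worker processes.
client.close()
cluster.close()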
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/entrypoint.sh
#!/bin/bash
source activate rapids
if [[ "$1" == "serve" ]]; then
echo -e "@ entrypoint -> launching serving script \n"
python serve.py
else
echo -e "@ entrypoint -> launching training script \n"
python train.py
fi
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/train.py
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import sys
import traceback
from HPOConfig import HPOConfig
from MLWorkflow import create_workflow
def train():
hpo_config = HPOConfig(input_args=sys.argv[1:])
ml_workflow = create_workflow(hpo_config)
# cross-validation to improve robustness via multiple train/test reshuffles
for i_fold in range(hpo_config.cv_folds):
# ingest
dataset = ml_workflow.ingest_data()
# handle missing samples [ drop ]
dataset = ml_workflow.handle_missing_data(dataset)
# split into train and test set
X_train, X_test, y_train, y_test = ml_workflow.split_dataset(
dataset, random_state=i_fold
)
# train model
trained_model = ml_workflow.fit(X_train, y_train)
# use trained model to predict target labels of test data
predictions = ml_workflow.predict(trained_model, X_test)
# score test set predictions against ground truth
score = ml_workflow.score(y_test, predictions)
# save trained model [ if it sets a new-high score ]
ml_workflow.save_best_model(score, trained_model)
# restart cluster to avoid memory creep [ for multi-CPU/GPU ]
ml_workflow.cleanup(i_fold)
# emit final score to cloud HPO [i.e., SageMaker]
ml_workflow.emit_final_score()
def configure_logging():
hpo_log = logging.getLogger("hpo_log")
log_handler = logging.StreamHandler()
log_handler.setFormatter(
logging.Formatter("%(asctime)-15s %(levelname)8s %(name)s %(message)s")
)
hpo_log.addHandler(log_handler)
hpo_log.setLevel(logging.DEBUG)
hpo_log.propagate = False
if __name__ == "__main__":
configure_logging()
try:
train()
sys.exit(0) # success exit code
except Exception:
traceback.print_exc()
sys.exit(-1) # failure exit code
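# Hedged illustration (not part of the original train.py): SageMaker invokes this script
# with the sampled hyperparameters passed as command-line flags, which HPOConfig consumes
# in parse_hyper_parameter_inputs(); the flag values below are invented for illustration.
example_hpo_args = ["--max_depth", "8", "--num_boost_round", "200", "--learning_rate", "0.1"]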
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/notebook.ipynb
%pip install --upgrade boto3
import sagemaker
from helper_functions import *
execution_role = sagemaker.get_execution_role()
session = sagemaker.Session()
account = !(aws sts get-caller-identity --query Account --output text)
region = !(aws configure get region)
account, region
# please choose dataset S3 bucket and directory
data_bucket = "sagemaker-rapids-hpo-" + region[0]
dataset_directory = "10_year" # '1_year', '3_year', '10_year', 'NYC_taxi'
# please choose output bucket for trained model(s)
model_output_bucket = session.default_bucket()
s3_data_input = f"s3://{data_bucket}/{dataset_directory}"
s3_model_output = f"s3://{model_output_bucket}/trained-models"
best_hpo_model_local_save_directory = os.getcwd()
# please choose learning algorithm
algorithm_choice = "XGBoost"
assert algorithm_choice in ["XGBoost", "RandomForest", "KMeans"]
# please choose cross-validation folds
cv_folds = 10
assert cv_folds >= 1
# please choose code variant
ml_workflow_choice = "multiGPU"
assert ml_workflow_choice in ["singleCPU", "singleGPU", "multiCPU", "multiGPU"]
# please choose HPO search ranges
hyperparameter_ranges = {
"max_depth": sagemaker.parameter.IntegerParameter(5, 15),
"n_estimators": sagemaker.parameter.IntegerParameter(100, 500),
"max_features": sagemaker.parameter.ContinuousParameter(0.1, 1.0),
}  # see note above for adding additional parameters
if "XGBoost" in algorithm_choice:
    # number of trees parameter name difference b/w XGBoost and RandomForest
    hyperparameter_ranges["num_boost_round"] = hyperparameter_ranges.pop("n_estimators")
if "KMeans" in algorithm_choice:
hyperparameter_ranges = {
"n_clusters": sagemaker.parameter.IntegerParameter(2, 20),
"max_iter": sagemaker.parameter.IntegerParameter(100, 500),
    }
# please choose HPO search strategy
search_strategy = "Random"
assert search_strategy in ["Random", "Bayesian"]
# please choose total number of HPO experiments [ we have set this number very low to allow for automated CI testing ]
max_jobs = 100
# please choose number of experiments that can run in parallel
max_parallel_jobs = 10
max_duration_of_experiment_seconds = 60 * 60 * 24
# we will recommend a compute instance type, feel free to modify
instance_type = recommend_instance_type(ml_workflow_choice, dataset_directory)
# please choose whether spot instances should be used
use_spot_instances_flag = True
summarize_choices(
s3_data_input,
s3_model_output,
ml_workflow_choice,
algorithm_choice,
cv_folds,
instance_type,
use_spot_instances_flag,
search_strategy,
max_jobs,
max_parallel_jobs,
max_duration_of_experiment_seconds,
)
# %load train.py
# %load workflows/MLWorkflowSingleGPU.py
rapids_base_container = "{{ rapids_container }}"
image_base = "rapids-sagemaker-mnmg-100"
image_tag = rapids_base_container.split(":")[1]
ecr_fullname = (
    f"{account[0]}.dkr.ecr.{region[0]}.amazonaws.com/{image_base}:{image_tag}"
)
ecr_fullname
with open("Dockerfile", "w") as dockerfile:
dockerfile.writelines(
f"FROM {rapids_base_container} \n\n"
f'ENV AWS_DATASET_DIRECTORY="{dataset_directory}"\n'
f'ENV AWS_ALGORITHM_CHOICE="{algorithm_choice}"\n'
f'ENV AWS_ML_WORKFLOW_CHOICE="{ml_workflow_choice}"\n'
f'ENV AWS_CV_FOLDS="{cv_folds}"\n'
    )
%%writefile -a Dockerfile
# ensure printed output/log-messages retain correct order
ENV PYTHONUNBUFFERED=True
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids \
&& pip3 install sagemaker-training cupy-cuda11x flask dask-ml \
&& pip3 install --upgrade protobuf
# path where SageMaker looks for code when container runs in the cloud
ENV CLOUD_PATH="/opt/ml/code"
# copy our latest [local] code into the container
COPY . $CLOUD_PATH
# make the entrypoint script executable
RUN chmod +x $CLOUD_PATH/entrypoint.sh
WORKDIR $CLOUD_PATH
ENTRYPOINT ["./entrypoint.sh"]validate_dockerfile(rapids_base_container)
!cat Dockerfile%%time
!docker build -t $ecr_fullname --build-arg RAPIDS_IMAGE={{ rapids_container }} .docker_login_str = !(aws ecr get-login --region {region[0]} --no-include-email)repository_query = !(aws ecr describe-repositories --repository-names $image_base)
if repository_query[0] == "":
!(aws ecr create-repository --repository-name $image_base)# 'volume_size' - EBS volume size in GB, default = 30
estimator_params = {
"image_uri": ecr_fullname,
"role": execution_role,
"instance_type": instance_type,
"instance_count": 2,
"input_mode": "File",
"output_path": s3_model_output,
"use_spot_instances": use_spot_instances_flag,
"max_run": max_duration_of_experiment_seconds, # 24 hours
"sagemaker_session": session,
}
if use_spot_instances_flag == True:
    estimator_params.update({"max_wait": max_duration_of_experiment_seconds + 1})
estimator = sagemaker.estimator.Estimator(**estimator_params)
summarize_choices(
s3_data_input,
s3_model_output,
ml_workflow_choice,
algorithm_choice,
cv_folds,
instance_type,
use_spot_instances_flag,
search_strategy,
max_jobs,
max_parallel_jobs,
max_duration_of_experiment_seconds,
)
job_name = new_job_name_from_config(
dataset_directory,
region,
ml_workflow_choice,
algorithm_choice,
cv_folds,
instance_type,
)
estimator.fit(inputs=s3_data_input, job_name=job_name.lower())
metric_definitions = [{"Name": "final-score", "Regex": "final-score: (.*);"}]
objective_metric_name = "final-score"
hpo = sagemaker.tuner.HyperparameterTuner(
estimator=estimator,
metric_definitions=metric_definitions,
objective_metric_name=objective_metric_name,
objective_type="Maximize",
hyperparameter_ranges=hyperparameter_ranges,
strategy=search_strategy,
max_jobs=max_jobs,
max_parallel_jobs=max_parallel_jobs,
)
summarize_choices(
s3_data_input,
s3_model_output,
ml_workflow_choice,
algorithm_choice,
cv_folds,
instance_type,
use_spot_instances_flag,
search_strategy,
max_jobs,
max_parallel_jobs,
max_duration_of_experiment_seconds,
)
# tuning_job_name = new_job_name_from_config(dataset_directory, region, ml_workflow_choice,
#                                            algorithm_choice, cv_folds,
#                                            # instance_type)
# hpo.fit( inputs=s3_data_input,
#          job_name=tuning_job_name,
#          wait=True,
#          logs='All')
# hpo.wait() # block until the .fit call above is completed
tuning_job_name = "air-mGPU-XGB-10cv-527fd372fa4d8d"
hpo_results = summarize_hpo_results(tuning_job_name)
sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name).dataframe()
local_filename, s3_path_to_best_model = download_best_model(
model_output_bucket,
s3_model_output,
hpo_results,
best_hpo_model_local_save_directory,
)
endpoint_model = sagemaker.model.Model(
image_uri=ecr_fullname, role=execution_role, model_data=s3_path_to_best_model
)
ecr_fullname
DEMO_SERVING_FLAG = True
if DEMO_SERVING_FLAG:
    endpoint_model.deploy(
        initial_instance_count=1, instance_type="ml.g4dn.2xlarge"
    )  # 'ml.p3.2xlarge'
if DEMO_SERVING_FLAG:
predictor = sagemaker.predictor.Predictor(
endpoint_name=str(endpoint_model.endpoint_name), sagemaker_session=session
)
if dataset_directory in ["1_year", "3_year", "10_year"]:
on_time_example = [
2019.0,
4.0,
12.0,
2.0,
3647.0,
20452.0,
30977.0,
33244.0,
1943.0,
-9.0,
0.0,
75.0,
491.0,
] # 9 minutes early departure
late_example = [
2018.0,
3.0,
9.0,
5.0,
2279.0,
20409.0,
30721.0,
31703.0,
733.0,
123.0,
1.0,
61.0,
200.0,
]
example_payload = str(list([on_time_example, late_example]))
else:
example_payload = "" # fill in a sample payload
result = predictor.predict(example_payload)
    print(result)
# if DEMO_SERVING_FLAG:
# predictor.delete_endpoint()
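# Hedged follow-up (not part of the original notebook): the analytics dataframe above can be
# sorted to list the best-performing training jobs first; "FinalObjectiveValue" is a standard
# column of HyperparameterTuningJobAnalytics output.
hpo_df = sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name).dataframe()
print(hpo_df.sort_values("FinalObjectiveValue", ascending=False).head())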
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/MLWorkflow.py
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import functools
import logging
import time
from abc import abstractmethod
hpo_log = logging.getLogger("hpo_log")
def create_workflow(hpo_config):
"""Workflow Factory [instantiate MLWorkflow based on config]"""
if hpo_config.compute_type == "single-CPU":
from workflows.MLWorkflowSingleCPU import MLWorkflowSingleCPU
return MLWorkflowSingleCPU(hpo_config)
if hpo_config.compute_type == "multi-CPU":
from workflows.MLWorkflowMultiCPU import MLWorkflowMultiCPU
return MLWorkflowMultiCPU(hpo_config)
if hpo_config.compute_type == "single-GPU":
from workflows.MLWorkflowSingleGPU import MLWorkflowSingleGPU
return MLWorkflowSingleGPU(hpo_config)
if hpo_config.compute_type == "multi-GPU":
from workflows.MLWorkflowMultiGPU import MLWorkflowMultiGPU
return MLWorkflowMultiGPU(hpo_config)
class MLWorkflow:
@abstractmethod
def ingest_data(self):
pass
@abstractmethod
def handle_missing_data(self, dataset):
pass
@abstractmethod
def split_dataset(self, dataset, i_fold):
pass
@abstractmethod
def fit(self, X_train, y_train):
pass
@abstractmethod
def predict(self, trained_model, X_test):
pass
@abstractmethod
def score(self, y_test, predictions):
pass
@abstractmethod
def save_trained_model(self, score, trained_model):
pass
@abstractmethod
def cleanup(self, i_fold):
pass
@abstractmethod
def emit_final_score(self):
pass
def timer_decorator(target_function):
@functools.wraps(target_function)
def timed_execution_wrapper(*args, **kwargs):
start_time = time.perf_counter()
result = target_function(*args, **kwargs)
exec_time = time.perf_counter() - start_time
hpo_log.info(
f" --- {target_function.__name__}" f" completed in {exec_time:.5f} s"
)
return result
return timed_execution_wrapper
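# Hedged usage sketch (not part of the original MLWorkflow.py): any callable wrapped with
# timer_decorator has its wall-clock execution time reported through the "hpo_log" logger.
@timer_decorator
def _example_timed_step():
    # hypothetical stand-in for a real workflow step
    return sum(range(1000))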
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/helper_functions.py
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import random
import traceback
import uuid
import boto3
def recommend_instance_type(code_choice, dataset_directory):
"""
Based on the code and [airline] dataset-size choices we recommend
instance types that we've tested and are known to work.
Feel free to ignore/make a different choice.
"""
recommended_instance_type = None
if "CPU" in code_choice and dataset_directory in [
"1_year",
"3_year",
"NYC_taxi",
]: # noqa
detail_str = "16 cpu cores, 64GB memory"
recommended_instance_type = "ml.m5.4xlarge"
elif "CPU" in code_choice and dataset_directory in ["10_year"]:
detail_str = "96 cpu cores, 384GB memory"
recommended_instance_type = "ml.m5.24xlarge"
if code_choice == "singleGPU":
detail_str = "1x GPU [ V100 ], 16GB GPU memory, 61GB CPU memory"
recommended_instance_type = "ml.p3.2xlarge"
assert dataset_directory not in ["10_year"] # ! switch to multi-GPU
elif code_choice == "multiGPU":
detail_str = "4x GPUs [ V100 ], 64GB GPU memory, 244GB CPU memory"
recommended_instance_type = "ml.p3.8xlarge"
print(
f"recommended instance type : {recommended_instance_type} \n"
f"instance details : {detail_str}"
)
return recommended_instance_type
def validate_dockerfile(rapids_base_container, dockerfile_name="Dockerfile"):
"""Validate that our desired rapids base image matches the Dockerfile"""
with open(dockerfile_name) as dockerfile_handle:
if rapids_base_container not in dockerfile_handle.read():
raise Exception(
"Dockerfile base layer [i.e. FROM statment] does"
" not match the variable rapids_base_container"
)
def summarize_choices(
s3_data_input,
s3_model_output,
code_choice,
algorithm_choice,
cv_folds,
instance_type,
use_spot_instances_flag,
search_strategy,
max_jobs,
max_parallel_jobs,
max_duration_of_experiment_seconds,
):
"""
Print the configuration choices,
often useful before submitting large jobs
"""
print(f"s3 data input =\t{s3_data_input}")
print(f"s3 model output =\t{s3_model_output}")
print(f"compute =\t{code_choice}")
print(f"algorithm =\t{algorithm_choice}, {cv_folds} cv-fold")
print(f"instance =\t{instance_type}")
print(f"spot instances =\t{use_spot_instances_flag}")
print(f"hpo strategy =\t{search_strategy}")
print(f"max_experiments =\t{max_jobs}")
print(f"max_parallel =\t{max_parallel_jobs}")
print(f"max runtime =\t{max_duration_of_experiment_seconds} sec")
def summarize_hpo_results(tuning_job_name):
"""
Query tuning results and display the best score,
parameters, and job-name
"""
hpo_results = (
boto3.Session()
.client("sagemaker")
.describe_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuning_job_name
)
)
best_job = hpo_results["BestTrainingJob"]["TrainingJobName"]
best_score = hpo_results["BestTrainingJob"][
"FinalHyperParameterTuningJobObjectiveMetric"
][
"Value"
] # noqa
best_params = hpo_results["BestTrainingJob"]["TunedHyperParameters"]
print(f"best score: {best_score}")
print(f"best params: {best_params}")
print(f"best job-name: {best_job}")
return hpo_results
def download_best_model(bucket, s3_model_output, hpo_results, local_directory):
"""Download best model from S3"""
try:
target_bucket = boto3.resource("s3").Bucket(bucket)
path_prefix = os.path.join(
s3_model_output.split("/")[-1],
hpo_results["BestTrainingJob"]["TrainingJobName"],
"output",
)
objects = target_bucket.objects.filter(Prefix=path_prefix)
for obj in objects:
path, filename = os.path.split(obj.key)
local_filename = os.path.join(local_directory, "best_" + filename)
s3_path_to_model = os.path.join("s3://", bucket, path_prefix, filename)
target_bucket.download_file(obj.key, local_filename)
print(
f"Successfully downloaded best model\n"
f"> filename: {local_filename}\n"
f"> local directory : {local_directory}\n\n"
f"full S3 path : {s3_path_to_model}"
)
return local_filename, s3_path_to_model
except Exception as download_error:
print(f"! Unable to download best model: {download_error}")
return None
def new_job_name_from_config(
dataset_directory,
region,
code_choice,
algorithm_choice,
cv_folds,
instance_type,
trim_limit=32,
):
"""
Build a jobname string that captures the HPO configuration options.
    This is helpful for interpreting logs and for general book-keeping
"""
job_name = None
try:
if dataset_directory in ["1_year", "3_year", "10_year"]:
data_choice_str = "air"
validate_region(region)
elif dataset_directory in ["NYC_taxi"]:
data_choice_str = "nyc"
validate_region(region)
else:
data_choice_str = "byo"
code_choice_str = code_choice[0] + code_choice[-3:]
if "randomforest" in algorithm_choice.lower():
algorithm_choice_str = "RF"
if "xgboost" in algorithm_choice.lower():
algorithm_choice_str = "XGB"
if "kmeans" in algorithm_choice.lower():
algorithm_choice_str = "KMeans"
# instance_type_str = '-'.join(instance_type.split('.')[1:])
random_str = "".join(random.choices(uuid.uuid4().hex, k=trim_limit))
job_name = (
f"{data_choice_str}-{code_choice_str}"
f"-{algorithm_choice_str}-{cv_folds}cv"
f"-{random_str}"
)
job_name = job_name[:trim_limit]
print(f"generated job name : {job_name}\n")
except Exception:
traceback.print_exc()
return job_name
def validate_region(region):
"""
Check that the current [compute] region is one of the
two regions where the demo data is hosted
"""
if isinstance(region, list):
region = region[0]
if region not in ["us-east-1", "us-west-2"]:
raise Exception(
"Unsupported region based on demo data location,"
" please switch to us-east-1 or us-west-2"
)
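# Hedged usage sketch (not part of the original helper_functions.py): composing a job name
# for a hypothetical multi-GPU XGBoost run on the 10-year airline dataset in us-east-1.
example_job_name = new_job_name_from_config(
    "10_year", ["us-east-1"], "multiGPU", "XGBoost", 10, "ml.p3.8xlarge"
)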
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/Dockerfile
ARG RAPIDS_IMAGE
FROM $RAPIDS_IMAGE as rapids
ENV AWS_DATASET_DIRECTORY="10_year"
ENV AWS_ALGORITHM_CHOICE="XGBoost"
ENV AWS_ML_WORKFLOW_CHOICE="multiGPU"
ENV AWS_CV_FOLDS="10"
# ensure printed output/log-messages retain correct order
ENV PYTHONUNBUFFERED=True
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids \
&& pip3 install sagemaker-training cupy-cuda11x flask dask-ml \
&& pip3 install --upgrade protobuf
# path where SageMaker looks for code when container runs in the cloud
ENV CLOUD_PATH="/opt/ml/code"
# copy our latest [local] code into the container
COPY . $CLOUD_PATH
# make the entrypoint script executable
RUN chmod +x $CLOUD_PATH/entrypoint.sh
WORKDIR $CLOUD_PATH
ENTRYPOINT ["./entrypoint.sh"]
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/HPOConfig.py
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import argparse
import glob
import logging
import os
import pprint
import HPODatasets
hpo_log = logging.getLogger("hpo_log")
class HPOConfig:
"""Cloud integrated RAPIDS HPO functionality with AWS SageMaker focus"""
sagemaker_directory_structure = {
"train_data": "/opt/ml/input/data/training",
"model_store": "/opt/ml/model",
"output_artifacts": "/opt/ml/output",
}
def __init__(
self,
input_args,
directory_structure=sagemaker_directory_structure,
worker_limit=None,
):
# parse configuration from job-name
(
self.dataset_type,
self.model_type,
self.compute_type,
self.cv_folds,
) = self.parse_configuration()
# parse input parameters for HPO
self.model_params = self.parse_hyper_parameter_inputs(input_args)
# parse dataset files/paths and dataset columns, labels, dtype
(
self.target_files,
self.input_file_type,
self.dataset_columns,
self.label_column,
self.dataset_dtype,
) = self.detect_data_inputs(directory_structure)
self.model_store_directory = directory_structure["model_store"]
self.output_artifacts_directory = directory_structure[
"output_artifacts"
] # noqa
def parse_configuration(self):
"""Parse the ENV variables [ set in the dockerfile ]
to determine configuration settings"""
hpo_log.info("\nparsing configuration from environment settings...")
dataset_type = "Airline"
model_type = "RandomForest"
compute_type = "single-GPU"
cv_folds = 3
try:
# parse dataset choice
dataset_selection = os.environ["AWS_DATASET_DIRECTORY"].lower()
if dataset_selection in ["1_year", "3_year", "10_year"]:
dataset_type = "Airline"
elif dataset_selection in ["nyc_taxi"]:
dataset_type = "NYCTaxi"
else:
dataset_type = "BYOData"
# parse model type
model_selection = os.environ["AWS_ALGORITHM_CHOICE"].lower()
if model_selection in ["randomforest"]:
model_type = "RandomForest"
elif model_selection in ["xgboost"]:
model_type = "XGBoost"
elif model_selection in ["kmeans"]:
model_type = "KMeans"
# parse compute choice
compute_selection = os.environ["AWS_ML_WORKFLOW_CHOICE"].lower()
if "multigpu" in compute_selection:
compute_type = "multi-GPU"
elif "multicpu" in compute_selection:
compute_type = "multi-CPU"
elif "singlecpu" in compute_selection:
compute_type = "single-CPU"
elif "singlegpu" in compute_selection:
compute_type = "single-GPU"
# parse CV folds
cv_folds = int(os.environ["AWS_CV_FOLDS"])
except KeyError as error:
hpo_log.info(f"Configuration parser failed : {error}")
assert dataset_type in ["Airline", "NYCTaxi", "BYOData"]
assert model_type in ["RandomForest", "XGBoost", "KMeans"]
assert compute_type in ["single-GPU", "multi-GPU", "single-CPU", "multi-CPU"]
assert cv_folds >= 1
hpo_log.info(
f" Dataset: {dataset_type}\n"
f" Compute: {compute_type}\n"
f" Algorithm: {model_type}\n"
f" CV_folds: {cv_folds}\n"
)
return dataset_type, model_type, compute_type, cv_folds
def parse_hyper_parameter_inputs(self, input_args):
"""Parse hyperparmeters provided by the HPO orchestrator"""
hpo_log.info(
"parsing model hyperparameters from command line arguments...log"
) # noqa
parser = argparse.ArgumentParser()
if "XGBoost" in self.model_type:
# intentionally breaking PEP8 below for argument alignment
parser.add_argument("--max_depth", type=int, default=5) # noqa
parser.add_argument("--num_boost_round", type=int, default=10) # noqa
parser.add_argument("--subsample", type=float, default=0.9) # noqa
parser.add_argument("--learning_rate", type=float, default=0.3) # noqa
parser.add_argument("--reg_lambda", type=float, default=1) # noqa
parser.add_argument("--gamma", type=float, default=0.0) # noqa
parser.add_argument("--alpha", type=float, default=0.0) # noqa
parser.add_argument("--seed", type=int, default=0) # noqa
args, unknown_args = parser.parse_known_args(input_args)
model_params = {
"max_depth": args.max_depth,
"num_boost_round": args.num_boost_round,
"learning_rate": args.learning_rate,
"gamma": args.gamma,
"lambda": args.reg_lambda,
"random_state": args.seed,
"verbosity": 0,
"seed": args.seed,
"objective": "binary:logistic",
}
if "single-CPU" in self.compute_type:
model_params.update({"nthreads": os.cpu_count()})
if "GPU" in self.compute_type:
model_params.update({"tree_method": "gpu_hist"})
else:
model_params.update({"tree_method": "hist"})
elif "RandomForest" in self.model_type:
# intentionally breaking PEP8 below for argument alignment
parser.add_argument("--max_depth", type=int, default=5) # noqa
parser.add_argument("--n_estimators", type=int, default=10) # noqa
parser.add_argument("--max_features", type=float, default=1.0) # noqa
parser.add_argument("--n_bins", type=float, default=64) # noqa
parser.add_argument("--bootstrap", type=bool, default=True) # noqa
parser.add_argument("--random_state", type=int, default=0) # noqa
args, unknown_args = parser.parse_known_args(input_args)
model_params = {
"max_depth": args.max_depth,
"n_estimators": args.n_estimators,
"max_features": args.max_features,
"n_bins": args.n_bins,
"bootstrap": args.bootstrap,
"random_state": args.random_state,
}
elif "KMeans" in self.model_type:
parser.add_argument("--n_clusters", type=int, default=8)
parser.add_argument("--max_iter", type=int, default=300)
parser.add_argument("--random_state", type=int, default=1)
compute_selection = os.environ["AWS_ML_WORKFLOW_CHOICE"].lower()
if "gpu" in compute_selection: # 'singlegpu' or 'multigpu'
parser.add_argument("--init", type=str, default="scalable-k-means++")
elif "cpu" in compute_selection:
parser.add_argument("--init", type=str, default="k-means++")
args, unknown_args = parser.parse_known_args(input_args)
model_params = {
"n_clusters": args.n_clusters,
"max_iter": args.max_iter,
"random_state": args.random_state,
"init": args.init,
}
else:
raise Exception(f"!error: unknown model type {self.model_type}")
hpo_log.info(pprint.pformat(model_params, indent=5))
return model_params
def detect_data_inputs(self, directory_structure):
"""
Scan mounted data directory to determine files to ingest.
Notes: single-CPU pandas read_parquet needs a directory input
single-GPU cudf read_parquet needs a list of files
multi-CPU/GPU can accept either a list or a directory
"""
parquet_files = glob.glob(
os.path.join(directory_structure["train_data"], "*.parquet")
)
csv_files = glob.glob(os.path.join(directory_structure["train_data"], "*.csv"))
if len(csv_files):
hpo_log.info("CSV input files detected")
target_files = csv_files
input_file_type = "CSV"
elif len(parquet_files):
hpo_log.info("Parquet input files detected")
"""
if 'single-CPU' in self.compute_type:
# pandas read_parquet needs a directory input - no longer the case with newest pandas
target_files = directory_structure['train_data'] + '/'
else:
"""
target_files = parquet_files
input_file_type = "Parquet"
else:
raise Exception("! No [CSV or Parquet] input files detected")
n_datafiles = len(target_files)
assert n_datafiles > 0
pprint.pprint(target_files)
hpo_log.info(f"detected {n_datafiles} files as input")
if "Airline" in self.dataset_type:
dataset_columns = HPODatasets.airline_feature_columns
dataset_label_column = HPODatasets.airline_label_column
dataset_dtype = HPODatasets.airline_dtype
elif "NYCTaxi" in self.dataset_type:
dataset_columns = HPODatasets.nyctaxi_feature_columns
dataset_label_column = HPODatasets.nyctaxi_label_column
dataset_dtype = HPODatasets.nyctaxi_dtype
elif "BYOData" in self.dataset_type:
dataset_columns = HPODatasets.BYOD_feature_columns
dataset_label_column = HPODatasets.BYOD_label_column
dataset_dtype = HPODatasets.BYOD_dtype
return (
target_files,
input_file_type,
dataset_columns,
dataset_label_column,
dataset_dtype,
)
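# Hedged illustration (not part of the original HPOConfig.py): parse_configuration() reads
# the four AWS_* environment variables baked into the Dockerfile, so exercising the config
# outside SageMaker could start by setting them by hand (values assumed for illustration).
import os
os.environ.setdefault("AWS_DATASET_DIRECTORY", "10_year")
os.environ.setdefault("AWS_ALGORITHM_CHOICE", "XGBoost")
os.environ.setdefault("AWS_ML_WORKFLOW_CHOICE", "multiGPU")
os.environ.setdefault("AWS_CV_FOLDS", "10")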
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/HPODatasets.py
""" Airline Dataset target label and feature column names """
airline_label_column = "ArrDel15"
airline_feature_columns = [
"Year",
"Quarter",
"Month",
"DayOfWeek",
"Flight_Number_Reporting_Airline",
"DOT_ID_Reporting_Airline",
"OriginCityMarketID",
"DestCityMarketID",
"DepTime",
"DepDelay",
"DepDel15",
"ArrDel15",
"AirTime",
"Distance",
]
airline_dtype = "float32"
""" NYC TLC Trip Record Data target label and feature column names """
nyctaxi_label_column = "above_average_tip"
nyctaxi_feature_columns = [
"VendorID",
"tpep_pickup_datetime",
"tpep_dropoff_datetime",
"passenger_count",
"trip_distance",
"RatecodeID",
"store_and_fwd_flag",
"PULocationID",
"DOLocationID",
"payment_type",
"fare_amount",
"extra",
"mta_tax",
"tolls_amount",
"improvement_surcharge",
"total_amount",
"congestion_surcharge",
"above_average_tip",
]
nyctaxi_dtype = "float32"
""" Insert your dataset here! """
BYOD_label_column = "" # e.g., nyctaxi_label_column
BYOD_feature_columns = [] # e.g., nyctaxi_feature_columns
BYOD_dtype = None # e.g., nyctaxi_dtype
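# Hedged BYOD sketch (not part of the original file): for a custom tabular dataset the
# placeholders above might be filled in as below; the column names here are invented, and
# (matching the airline/NYC-taxi convention above) the label column is also listed among
# the feature columns.
BYOD_label_column = "churned"
BYOD_feature_columns = ["tenure_months", "monthly_charges", "num_support_calls", "churned"]
BYOD_dtype = "float32"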
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/serve.py
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import glob
import json
import logging
import os
import sys
import time
import traceback
from functools import lru_cache
import flask
import joblib
import numpy
import xgboost
from flask import Flask, Response
try:
"""check for GPU via library imports"""
import cupy
from cuml import ForestInference
GPU_INFERENCE_FLAG = True
except ImportError as gpu_import_error:
GPU_INFERENCE_FLAG = False
print(f"\n!GPU import error: {gpu_import_error}\n")
# set to true to print incoming request headers and data
DEBUG_FLAG = False
def serve(xgboost_threshold=0.5):
"""Flask Inference Server for SageMaker hosting of RAPIDS Models"""
app = Flask(__name__)
logging.basicConfig(level=logging.DEBUG)
if GPU_INFERENCE_FLAG:
app.logger.info("GPU Model Serving Workflow")
app.logger.info(f"> {cupy.cuda.runtime.getDeviceCount()}" f" GPUs detected \n")
else:
app.logger.info("CPU Model Serving Workflow")
app.logger.info(f"> {os.cpu_count()} CPUs detected \n")
@app.route("/ping", methods=["GET"])
def ping():
"""SageMaker required method, ping heartbeat"""
return Response(response="\n", status=200)
@lru_cache
def load_trained_model():
"""
Cached loading of trained [ XGBoost or RandomForest ] model into memory
Note: Models selected via filename parsing, edit if necessary
"""
xgb_models = glob.glob("/opt/ml/model/*_xgb")
rf_models = glob.glob("/opt/ml/model/*_rf")
kmeans_models = glob.glob("/opt/ml/model/*_kmeans")
app.logger.info(f"detected xgboost models : {xgb_models}")
app.logger.info(f"detected randomforest models : {rf_models}")
app.logger.info(f"detected kmeans models : {kmeans_models}\n\n")
model_type = None
start_time = time.perf_counter()
if len(xgb_models):
model_type = "XGBoost"
model_filename = xgb_models[0]
if GPU_INFERENCE_FLAG:
# FIL
reloaded_model = ForestInference.load(model_filename)
else:
# native XGBoost
reloaded_model = xgboost.Booster()
reloaded_model.load_model(fname=model_filename)
elif len(rf_models):
model_type = "RandomForest"
model_filename = rf_models[0]
reloaded_model = joblib.load(model_filename)
elif len(kmeans_models):
model_type = "KMeans"
model_filename = kmeans_models[0]
reloaded_model = joblib.load(model_filename)
else:
raise Exception("! No trained models detected")
exec_time = time.perf_counter() - start_time
app.logger.info(f"> model {model_filename} " f"loaded in {exec_time:.5f} s \n")
return reloaded_model, model_type, model_filename
@app.route("/invocations", methods=["POST"])
def predict():
"""
Run CPU or GPU inference on input data,
        called every time an incoming request arrives
"""
# parse user input
try:
if DEBUG_FLAG:
app.logger.debug(flask.request.headers)
app.logger.debug(flask.request.content_type)
app.logger.debug(flask.request.get_data())
string_data = json.loads(flask.request.get_data())
query_data = numpy.array(string_data)
except Exception:
return Response(
response="Unable to parse input data"
"[ should be json/string encoded list of arrays ]",
status=415,
mimetype="text/csv",
)
# cached [reloading] of trained model to process incoming requests
reloaded_model, model_type, model_filename = load_trained_model()
try:
start_time = time.perf_counter()
if model_type == "XGBoost":
app.logger.info(
"running inference using XGBoost model :" f"{model_filename}"
)
if GPU_INFERENCE_FLAG:
predictions = reloaded_model.predict(query_data)
else:
dm_deserialized_data = xgboost.DMatrix(query_data)
predictions = reloaded_model.predict(dm_deserialized_data)
predictions = (predictions > xgboost_threshold) * 1.0
elif model_type == "RandomForest":
app.logger.info(
"running inference using RandomForest model :" f"{model_filename}"
)
if "gpu" in model_filename and not GPU_INFERENCE_FLAG:
raise Exception(
"attempting to run CPU inference "
"on a GPU trained RandomForest model"
)
predictions = reloaded_model.predict(query_data.astype("float32"))
elif model_type == "KMeans":
app.logger.info(
"running inference using KMeans model :" f"{model_filename}"
)
if "gpu" in model_filename and not GPU_INFERENCE_FLAG:
raise Exception(
"attempting to run CPU inference "
"on a GPU trained KMeans model"
)
predictions = reloaded_model.predict(query_data.astype("float32"))
app.logger.info(f"\n predictions: {predictions} \n")
exec_time = time.perf_counter() - start_time
app.logger.info(f" > inference finished in {exec_time:.5f} s \n")
# return predictions
return Response(
response=json.dumps(predictions.tolist()),
status=200,
mimetype="text/csv",
)
# error during inference
except Exception as inference_error:
app.logger.error(inference_error)
return Response(
response=f"Inference failure: {inference_error}\n",
status=400,
mimetype="text/csv",
)
# initial [non-cached] reload of trained model
reloaded_model, model_type, model_filename = load_trained_model()
# trigger start of Flask app
app.run(host="0.0.0.0", port=8080)
if __name__ == "__main__":
try:
serve()
sys.exit(0) # success exit code
except Exception:
traceback.print_exc()
sys.exit(-1) # failure exit code
"""
airline model inference test [ 3 non-late flights and one late flight ]
curl -X POST --header "Content-Type: application/json" --data '[[ ... ]]' http://0.0.0.0:8080/invocations
"""
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/workflows/MLWorkflowSingleCPU.py
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import os
import time
import joblib
import pandas
import xgboost
from MLWorkflow import MLWorkflow, timer_decorator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
hpo_log = logging.getLogger("hpo_log")
class MLWorkflowSingleCPU(MLWorkflow):
"""Single-CPU Workflow"""
def __init__(self, hpo_config):
hpo_log.info("Single-CPU Workflow")
self.start_time = time.perf_counter()
self.hpo_config = hpo_config
self.dataset_cache = None
self.cv_fold_scores = []
self.best_score = -1
@timer_decorator
def ingest_data(self):
"""Ingest dataset, CSV and Parquet supported"""
if self.dataset_cache is not None:
hpo_log.info("> skipping ingestion, using cache")
return self.dataset_cache
if "Parquet" in self.hpo_config.input_file_type:
hpo_log.info("> parquet data ingestion")
# assert isinstance(self.hpo_config.target_files, str)
filepath = self.hpo_config.target_files
dataset = pandas.read_parquet(
filepath,
columns=self.hpo_config.dataset_columns, # noqa
engine="pyarrow",
)
elif "CSV" in self.hpo_config.input_file_type:
hpo_log.info("> csv data ingestion")
if isinstance(self.hpo_config.target_files, list):
filepath = self.hpo_config.target_files[0]
elif isinstance(self.hpo_config.target_files, str):
filepath = self.hpo_config.target_files
dataset = pandas.read_csv(
filepath,
names=self.hpo_config.dataset_columns,
dtype=self.hpo_config.dataset_dtype,
header=0,
)
hpo_log.info(f"\t dataset shape: {dataset.shape}")
self.dataset_cache = dataset
return dataset
@timer_decorator
def handle_missing_data(self, dataset):
"""Drop samples with missing data [ inplace ]"""
dataset = dataset.dropna()
return dataset
@timer_decorator
def split_dataset(self, dataset, random_state):
"""
Split dataset into train and test data subsets,
currently using CV-fold index for randomness.
Plan to refactor with sklearn KFold
"""
hpo_log.info("> train-test split")
label_column = self.hpo_config.label_column
X_train, X_test, y_train, y_test = train_test_split(
dataset.loc[:, dataset.columns != label_column],
dataset[label_column],
random_state=random_state,
)
return (
X_train.astype(self.hpo_config.dataset_dtype),
X_test.astype(self.hpo_config.dataset_dtype),
y_train.astype(self.hpo_config.dataset_dtype),
y_test.astype(self.hpo_config.dataset_dtype),
)
@timer_decorator
def fit(self, X_train, y_train):
"""Fit decision tree model"""
if "XGBoost" in self.hpo_config.model_type:
hpo_log.info("> fit xgboost model")
dtrain = xgboost.DMatrix(data=X_train, label=y_train)
num_boost_round = self.hpo_config.model_params["num_boost_round"]
trained_model = xgboost.train(
dtrain=dtrain,
params=self.hpo_config.model_params,
num_boost_round=num_boost_round,
)
elif "RandomForest" in self.hpo_config.model_type:
hpo_log.info("> fit randomforest model")
trained_model = RandomForestClassifier(
n_estimators=self.hpo_config.model_params["n_estimators"],
max_depth=self.hpo_config.model_params["max_depth"],
max_features=self.hpo_config.model_params["max_features"],
bootstrap=self.hpo_config.model_params["bootstrap"],
n_jobs=-1,
).fit(X_train, y_train)
elif "KMeans" in self.hpo_config.model_type:
hpo_log.info("> fit kmeans model")
trained_model = KMeans(
n_clusters=self.hpo_config.model_params["n_clusters"],
max_iter=self.hpo_config.model_params["max_iter"],
random_state=self.hpo_config.model_params["random_state"],
init=self.hpo_config.model_params["init"],
).fit(X_train)
return trained_model
@timer_decorator
def predict(self, trained_model, X_test, threshold=0.5):
"""Inference with the trained model on the unseen test data"""
hpo_log.info("> predict with trained model ")
if "XGBoost" in self.hpo_config.model_type:
dtest = xgboost.DMatrix(X_test)
predictions = trained_model.predict(dtest)
predictions = (predictions > threshold) * 1.0
elif "RandomForest" in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
elif "KMeans" in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
return predictions
@timer_decorator
def score(self, y_test, predictions):
"""Score predictions vs ground truth labels on test data"""
dataset_dtype = self.hpo_config.dataset_dtype
score = accuracy_score(
y_test.astype(dataset_dtype), predictions.astype(dataset_dtype)
)
hpo_log.info(f"\t score = {score}")
self.cv_fold_scores.append(score)
return score
def save_best_model(self, score, trained_model, filename="saved_model"):
"""Persist/save model that sets a new high score"""
if score > self.best_score:
self.best_score = score
hpo_log.info("> saving high-scoring model")
output_filename = os.path.join(
self.hpo_config.model_store_directory, filename
)
if "XGBoost" in self.hpo_config.model_type:
trained_model.save_model(f"{output_filename}_scpu_xgb")
elif "RandomForest" in self.hpo_config.model_type:
joblib.dump(trained_model, f"{output_filename}_scpu_rf")
elif "KMeans" in self.hpo_config.model_type:
joblib.dump(trained_model, f"{output_filename}_scpu_kmeans")
def cleanup(self, i_fold):
hpo_log.info("> end of fold \n")
def emit_final_score(self):
"""Emit score for parsing by the cloud HPO orchestrator"""
exec_time = time.perf_counter() - self.start_time
hpo_log.info(f"total_time = {exec_time:.5f} s ")
if self.hpo_config.cv_folds > 1:
hpo_log.info(f"fold scores : {self.cv_fold_scores} \n")
# average over CV folds
final_score = sum(self.cv_fold_scores) / len(self.cv_fold_scores)
hpo_log.info(f"final-score: {final_score}; \n")
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/workflows/MLWorkflowMultiCPU.py
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import os
import time
import warnings
import dask
import joblib
import xgboost
from dask.distributed import Client, LocalCluster, wait
from dask_ml.model_selection import train_test_split
from MLWorkflow import MLWorkflow, timer_decorator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
hpo_log = logging.getLogger("hpo_log")
warnings.filterwarnings("ignore")
class MLWorkflowMultiCPU(MLWorkflow):
"""Multi-CPU Workflow"""
def __init__(self, hpo_config):
hpo_log.info("Multi-CPU Workflow")
self.start_time = time.perf_counter()
self.hpo_config = hpo_config
self.dataset_cache = None
self.cv_fold_scores = []
self.best_score = -1
self.cluster, self.client = self.cluster_initialize()
@timer_decorator
def cluster_initialize(self):
"""Initialize dask CPU cluster"""
cluster = None
client = None
self.n_workers = os.cpu_count()
cluster = LocalCluster(n_workers=self.n_workers)
client = Client(cluster)
hpo_log.info(f"dask multi-CPU cluster with {self.n_workers} workers ")
dask.config.set(
{
"temporary_directory": self.hpo_config.output_artifacts_directory,
"logging": {
"loggers": {"distributed.nanny": {"level": "CRITICAL"}}
}, # noqa
}
)
return cluster, client
def ingest_data(self):
"""Ingest dataset, CSV and Parquet supported"""
if self.dataset_cache is not None:
hpo_log.info("> skipping ingestion, using cache")
return self.dataset_cache
if "Parquet" in self.hpo_config.input_file_type:
hpo_log.info("> parquet data ingestion")
dataset = dask.dataframe.read_parquet(
self.hpo_config.target_files, columns=self.hpo_config.dataset_columns
)
elif "CSV" in self.hpo_config.input_file_type:
hpo_log.info("> csv data ingestion")
dataset = dask.dataframe.read_csv(
self.hpo_config.target_files,
names=self.hpo_config.dataset_columns,
dtype=self.hpo_config.dataset_dtype,
header=0,
)
hpo_log.info(f"\t dataset len: {len(dataset)}")
self.dataset_cache = dataset
return dataset
def handle_missing_data(self, dataset):
"""Drop samples with missing data [ inplace ]"""
dataset = dataset.dropna()
return dataset
@timer_decorator
def split_dataset(self, dataset, random_state):
"""
Split dataset into train and test data subsets,
currently using CV-fold index for randomness.
Plan to refactor with dask_ml KFold
"""
hpo_log.info("> train-test split")
label_column = self.hpo_config.label_column
train, test = train_test_split(dataset, random_state=random_state)
# build X [ features ], y [ labels ] for the train and test subsets
y_train = train[label_column]
X_train = train.drop(label_column, axis=1)
y_test = test[label_column]
X_test = test.drop(label_column, axis=1)
# persist
X_train = X_train.persist()
y_train = y_train.persist()
wait([X_train, y_train])
return (
X_train.astype(self.hpo_config.dataset_dtype),
X_test.astype(self.hpo_config.dataset_dtype),
y_train.astype(self.hpo_config.dataset_dtype),
y_test.astype(self.hpo_config.dataset_dtype),
)
@timer_decorator
def fit(self, X_train, y_train):
"""Fit decision tree model"""
if "XGBoost" in self.hpo_config.model_type:
hpo_log.info("> fit xgboost model")
dtrain = xgboost.dask.DaskDMatrix(self.client, X_train, y_train)
num_boost_round = self.hpo_config.model_params["num_boost_round"]
xgboost_output = xgboost.dask.train(
self.client,
self.hpo_config.model_params,
dtrain,
num_boost_round=num_boost_round,
)
trained_model = xgboost_output["booster"]
elif "RandomForest" in self.hpo_config.model_type:
hpo_log.info("> fit randomforest model")
trained_model = RandomForestClassifier(
n_estimators=self.hpo_config.model_params["n_estimators"],
max_depth=self.hpo_config.model_params["max_depth"],
max_features=self.hpo_config.model_params["max_features"],
n_jobs=-1,
).fit(X_train, y_train.astype("int32"))
elif "KMeans" in self.hpo_config.model_type:
hpo_log.info("> fit kmeans model")
trained_model = KMeans(
n_clusters=self.hpo_config.model_params["n_clusters"],
max_iter=self.hpo_config.model_params["max_iter"],
random_state=self.hpo_config.model_params["random_state"],
init=self.hpo_config.model_params["init"],
n_jobs=-1, # Deprecated since version 0.23 and will be removed in 1.0 (renaming of 0.25)
).fit(X_train)
return trained_model
@timer_decorator
def predict(self, trained_model, X_test, threshold=0.5):
"""Inference with the trained model on the unseen test data"""
hpo_log.info("> predict with trained model ")
if "XGBoost" in self.hpo_config.model_type:
dtest = xgboost.dask.DaskDMatrix(self.client, X_test)
predictions = xgboost.dask.predict(self.client, trained_model, dtest)
predictions = (predictions > threshold) * 1.0
elif "RandomForest" in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
elif "KMeans" in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
return predictions
@timer_decorator
def score(self, y_test, predictions):
"""Score predictions vs ground truth labels on test data"""
hpo_log.info("> score predictions")
score = accuracy_score(
y_test.astype(self.hpo_config.dataset_dtype),
predictions.astype(self.hpo_config.dataset_dtype),
)
hpo_log.info(f"\t score = {score}")
self.cv_fold_scores.append(score)
return score
def save_best_model(self, score, trained_model, filename="saved_model"):
"""Persist/save model that sets a new high score"""
if score > self.best_score:
self.best_score = score
hpo_log.info("> saving high-scoring model")
output_filename = os.path.join(
self.hpo_config.model_store_directory, filename
)
if "XGBoost" in self.hpo_config.model_type:
trained_model.save_model(f"{output_filename}_mcpu_xgb")
elif "RandomForest" in self.hpo_config.model_type:
joblib.dump(trained_model, f"{output_filename}_mcpu_rf")
elif "KMeans" in self.hpo_config.model_type:
joblib.dump(trained_model, f"{output_filename}_mcpu_kmeans")
@timer_decorator
async def cleanup(self, i_fold):
"""
Close and restart the cluster when multiple cross validation
folds are used to prevent memory creep.
"""
if i_fold == self.hpo_config.cv_folds - 1:
hpo_log.info("> done all folds; closing cluster\n")
await self.client.close()
await self.cluster.close()
elif i_fold < self.hpo_config.cv_folds - 1:
hpo_log.info("> end of fold; reinitializing cluster\n")
await self.client.close()
await self.cluster.close()
self.cluster, self.client = self.cluster_initialize()
def emit_final_score(self):
"""Emit score for parsing by the cloud HPO orchestrator"""
exec_time = time.perf_counter() - self.start_time
hpo_log.info(f"total_time = {exec_time:.5f} s ")
if self.hpo_config.cv_folds > 1:
hpo_log.info(f"fold scores : {self.cv_fold_scores} \n")
# average over CV folds
final_score = sum(self.cv_fold_scores) / len(self.cv_fold_scores)
hpo_log.info(f"final-score: {final_score}; \n")
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/workflows/MLWorkflowMultiGPU.py
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import os
import time
import warnings
import cupy
import dask
import dask_cudf
import joblib
import xgboost
from cuml.dask.cluster import KMeans
from cuml.dask.common.utils import persist_across_workers
from cuml.dask.ensemble import RandomForestClassifier
from cuml.metrics import accuracy_score
from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
from dask_ml.model_selection import train_test_split
from MLWorkflow import MLWorkflow, timer_decorator
hpo_log = logging.getLogger("hpo_log")
warnings.filterwarnings("ignore")
class MLWorkflowMultiGPU(MLWorkflow):
"""Multi-GPU Workflow"""
def __init__(self, hpo_config):
hpo_log.info("Multi-GPU Workflow")
self.start_time = time.perf_counter()
self.hpo_config = hpo_config
self.dataset_cache = None
self.cv_fold_scores = []
self.best_score = -1
self.cluster, self.client = self.cluster_initialize()
@timer_decorator
def cluster_initialize(self):
"""Initialize dask GPU cluster"""
cluster = None
client = None
self.n_workers = cupy.cuda.runtime.getDeviceCount()
cluster = LocalCUDACluster(n_workers=self.n_workers)
client = Client(cluster)
hpo_log.info(f"dask multi-GPU cluster with {self.n_workers} workers ")
dask.config.set(
{
"temporary_directory": self.hpo_config.output_artifacts_directory,
"logging": {
"loggers": {"distributed.nanny": {"level": "CRITICAL"}}
}, # noqa
}
)
return cluster, client
def ingest_data(self):
"""Ingest dataset, CSV and Parquet supported [ async/lazy ]"""
if self.dataset_cache is not None:
hpo_log.info("> skipping ingestion, using cache")
return self.dataset_cache
if "Parquet" in self.hpo_config.input_file_type:
hpo_log.info("> parquet data ingestion")
dataset = dask_cudf.read_parquet(
self.hpo_config.target_files, columns=self.hpo_config.dataset_columns
)
elif "CSV" in self.hpo_config.input_file_type:
hpo_log.info("> csv data ingestion")
dataset = dask_cudf.read_csv(
self.hpo_config.target_files,
names=self.hpo_config.dataset_columns,
header=0,
)
hpo_log.info(f"\t dataset len: {len(dataset)}")
self.dataset_cache = dataset
return dataset
def handle_missing_data(self, dataset):
"""Drop samples with missing data [ inplace ]"""
dataset = dataset.dropna()
return dataset
@timer_decorator
def split_dataset(self, dataset, random_state):
"""
Split dataset into train and test data subsets,
currently using CV-fold index for randomness.
Plan to refactor with dask_ml KFold
"""
hpo_log.info("> train-test split")
label_column = self.hpo_config.label_column
train, test = train_test_split(dataset, random_state=random_state)
# build X [ features ], y [ labels ] for the train and test subsets
y_train = train[label_column]
X_train = train.drop(label_column, axis=1)
y_test = test[label_column]
X_test = test.drop(label_column, axis=1)
# force execution
X_train, y_train, X_test, y_test = persist_across_workers(
self.client,
[X_train, y_train, X_test, y_test],
workers=self.client.has_what().keys(),
)
# wait!
wait([X_train, y_train, X_test, y_test])
return (
X_train.astype(self.hpo_config.dataset_dtype),
X_test.astype(self.hpo_config.dataset_dtype),
y_train.astype(self.hpo_config.dataset_dtype),
y_test.astype(self.hpo_config.dataset_dtype),
)
@timer_decorator
def fit(self, X_train, y_train):
"""Fit decision tree model"""
if "XGBoost" in self.hpo_config.model_type:
hpo_log.info("> fit xgboost model")
dtrain = xgboost.dask.DaskDMatrix(self.client, X_train, y_train)
num_boost_round = self.hpo_config.model_params["num_boost_round"]
xgboost_output = xgboost.dask.train(
self.client,
self.hpo_config.model_params,
dtrain,
num_boost_round=num_boost_round,
)
trained_model = xgboost_output["booster"]
elif "RandomForest" in self.hpo_config.model_type:
hpo_log.info("> fit randomforest model")
trained_model = RandomForestClassifier(
n_estimators=self.hpo_config.model_params["n_estimators"],
max_depth=self.hpo_config.model_params["max_depth"],
max_features=self.hpo_config.model_params["max_features"],
n_bins=self.hpo_config.model_params["n_bins"],
).fit(X_train, y_train.astype("int32"))
elif "KMeans" in self.hpo_config.model_type:
hpo_log.info("> fit kmeans model")
trained_model = KMeans(
n_clusters=self.hpo_config.model_params["n_clusters"],
max_iter=self.hpo_config.model_params["max_iter"],
random_state=self.hpo_config.model_params["random_state"],
init=self.hpo_config.model_params["init"],
).fit(X_train)
return trained_model
@timer_decorator
def predict(self, trained_model, X_test, threshold=0.5):
"""Inference with the trained model on the unseen test data"""
hpo_log.info("> predict with trained model ")
if "XGBoost" in self.hpo_config.model_type:
dtest = xgboost.dask.DaskDMatrix(self.client, X_test)
predictions = xgboost.dask.predict(
self.client, trained_model, dtest
).compute()
predictions = (predictions > threshold) * 1.0
elif "RandomForest" in self.hpo_config.model_type:
predictions = trained_model.predict(X_test).compute()
elif "KMeans" in self.hpo_config.model_type:
predictions = trained_model.predict(X_test).compute()
return predictions
@timer_decorator
def score(self, y_test, predictions):
"""Score predictions vs ground truth labels on test data"""
hpo_log.info("> score predictions")
y_test = y_test.compute()
score = accuracy_score(
y_test.astype(self.hpo_config.dataset_dtype),
predictions.astype(self.hpo_config.dataset_dtype),
)
hpo_log.info(f"\t score = {score}")
self.cv_fold_scores.append(score)
return score
def save_best_model(self, score, trained_model, filename="saved_model"):
"""Persist/save model that sets a new high score"""
if score > self.best_score:
self.best_score = score
hpo_log.info("> saving high-scoring model")
output_filename = os.path.join(
self.hpo_config.model_store_directory, filename
)
if "XGBoost" in self.hpo_config.model_type:
trained_model.save_model(f"{output_filename}_mgpu_xgb")
elif "RandomForest" in self.hpo_config.model_type:
joblib.dump(trained_model, f"{output_filename}_mgpu_rf")
elif "KMeans" in self.hpo_config.model_type:
joblib.dump(trained_model, f"{output_filename}_mgpu_kmeans")
@timer_decorator
async def cleanup(self, i_fold):
"""
Close and restart the cluster when multiple cross validation folds
are used to prevent memory creep.
"""
if i_fold == self.hpo_config.cv_folds - 1:
hpo_log.info("> done all folds; closing cluster")
await self.client.close()
await self.cluster.close()
elif i_fold < self.hpo_config.cv_folds - 1:
hpo_log.info("> end of fold; reinitializing cluster")
await self.client.close()
await self.cluster.close()
self.cluster, self.client = self.cluster_initialize()
def emit_final_score(self):
"""Emit score for parsing by the cloud HPO orchestrator"""
exec_time = time.perf_counter() - self.start_time
hpo_log.info(f"total_time = {exec_time:.5f} s ")
if self.hpo_config.cv_folds > 1:
hpo_log.info(f"fold scores : {self.cv_fold_scores}")
# average over CV folds
final_score = sum(self.cv_fold_scores) / len(self.cv_fold_scores)
hpo_log.info(f"final-score: {final_score};")
| 0 |
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo
|
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-hpo/workflows/MLWorkflowSingleGPU.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import os
import time
import cudf
import joblib
import xgboost
from cuml.cluster import KMeans
from cuml.ensemble import RandomForestClassifier
from cuml.metrics import accuracy_score
from cuml.model_selection import train_test_split
from MLWorkflow import MLWorkflow, timer_decorator
hpo_log = logging.getLogger("hpo_log")
class MLWorkflowSingleGPU(MLWorkflow):
"""Single-GPU Workflow"""
def __init__(self, hpo_config):
hpo_log.info("Single-GPU Workflow \n")
self.start_time = time.perf_counter()
self.hpo_config = hpo_config
self.dataset_cache = None
self.cv_fold_scores = []
self.best_score = -1
@timer_decorator
def ingest_data(self):
"""Ingest dataset, CSV and Parquet supported"""
if self.dataset_cache is not None:
hpo_log.info("skipping ingestion, using cache")
return self.dataset_cache
if "Parquet" in self.hpo_config.input_file_type:
dataset = cudf.read_parquet(
self.hpo_config.target_files, columns=self.hpo_config.dataset_columns
) # noqa
elif "CSV" in self.hpo_config.input_file_type:
if isinstance(self.hpo_config.target_files, list):
filepath = self.hpo_config.target_files[0]
elif isinstance(self.hpo_config.target_files, str):
filepath = self.hpo_config.target_files
hpo_log.info(self.hpo_config.dataset_columns)
dataset = cudf.read_csv(
filepath, names=self.hpo_config.dataset_columns, header=0
)
hpo_log.info(
f"ingested {self.hpo_config.input_file_type} dataset;"
f" shape = {dataset.shape}"
)
self.dataset_cache = dataset
return dataset
@timer_decorator
def handle_missing_data(self, dataset):
"""Drop samples with missing data [ inplace ]"""
dataset = dataset.dropna()
return dataset
@timer_decorator
def split_dataset(self, dataset, random_state):
"""
Split dataset into train and test data subsets,
currently using CV-fold index for randomness.
Plan to refactor with sklearn KFold
"""
hpo_log.info("> train-test split")
label_column = self.hpo_config.label_column
X_train, X_test, y_train, y_test = train_test_split(
dataset, label_column, random_state=random_state
)
return (
X_train.astype(self.hpo_config.dataset_dtype),
X_test.astype(self.hpo_config.dataset_dtype),
y_train.astype(self.hpo_config.dataset_dtype),
y_test.astype(self.hpo_config.dataset_dtype),
)
@timer_decorator
def fit(self, X_train, y_train):
"""Fit decision tree model"""
if "XGBoost" in self.hpo_config.model_type:
hpo_log.info("> fit xgboost model")
dtrain = xgboost.DMatrix(data=X_train, label=y_train)
num_boost_round = self.hpo_config.model_params["num_boost_round"]
trained_model = xgboost.train(
dtrain=dtrain,
params=self.hpo_config.model_params,
num_boost_round=num_boost_round,
)
elif "RandomForest" in self.hpo_config.model_type:
hpo_log.info("> fit randomforest model")
trained_model = RandomForestClassifier(
n_estimators=self.hpo_config.model_params["n_estimators"],
max_depth=self.hpo_config.model_params["max_depth"],
max_features=self.hpo_config.model_params["max_features"],
n_bins=self.hpo_config.model_params["n_bins"],
).fit(X_train, y_train.astype("int32"))
elif "KMeans" in self.hpo_config.model_type:
hpo_log.info("> fit kmeans model")
trained_model = KMeans(
n_clusters=self.hpo_config.model_params["n_clusters"],
max_iter=self.hpo_config.model_params["max_iter"],
random_state=self.hpo_config.model_params["random_state"],
init=self.hpo_config.model_params["init"],
).fit(X_train)
return trained_model
@timer_decorator
def predict(self, trained_model, X_test, threshold=0.5):
"""Inference with the trained model on the unseen test data"""
hpo_log.info("predict with trained model ")
if "XGBoost" in self.hpo_config.model_type:
dtest = xgboost.DMatrix(X_test)
predictions = trained_model.predict(dtest)
predictions = (predictions > threshold) * 1.0
elif "RandomForest" in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
elif "KMeans" in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
return predictions
@timer_decorator
def score(self, y_test, predictions):
"""Score predictions vs ground truth labels on test data"""
dataset_dtype = self.hpo_config.dataset_dtype
score = accuracy_score(
y_test.astype(dataset_dtype), predictions.astype(dataset_dtype)
)
hpo_log.info(f"score = {round(score,5)}")
self.cv_fold_scores.append(score)
return score
def save_best_model(self, score, trained_model, filename="saved_model"):
"""Persist/save model that sets a new high score"""
if score > self.best_score:
self.best_score = score
hpo_log.info("saving high-scoring model")
output_filename = os.path.join(
self.hpo_config.model_store_directory, filename
)
if "XGBoost" in self.hpo_config.model_type:
trained_model.save_model(f"{output_filename}_sgpu_xgb")
elif "RandomForest" in self.hpo_config.model_type:
joblib.dump(trained_model, f"{output_filename}_sgpu_rf")
elif "KMeans" in self.hpo_config.model_type:
joblib.dump(trained_model, f"{output_filename}_sgpu_kmeans")
def cleanup(self, i_fold):
hpo_log.info("end of cv-fold \n")
def emit_final_score(self):
"""Emit score for parsing by the cloud HPO orchestrator"""
exec_time = time.perf_counter() - self.start_time
hpo_log.info(f"total_time = {exec_time:.5f} s ")
if self.hpo_config.cv_folds > 1:
hpo_log.info(f"cv-fold scores : {self.cv_fold_scores} \n")
# average over CV folds
final_score = sum(self.cv_fold_scores) / len(self.cv_fold_scores)
hpo_log.info(f"final-score: {final_score}; \n")
| 0 |
rapidsai_public_repos/deployment/source/examples
|
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-higgs/notebook.ipynb
|
import sagemaker
import time
import boto3

execution_role = sagemaker.get_execution_role()
session = sagemaker.Session()
region = boto3.Session().region_name
account = boto3.client("sts").get_caller_identity().get("Account")

account, region

s3_data_dir = session.upload_data(path="dataset", key_prefix="dataset/higgs-dataset")

s3_data_dir

estimator_info = {
"rapids_container": "{{ rapids_container }}",
"ecr_image": "sagemaker-rapids-higgs:latest",
"ecr_repository": "sagemaker-rapids-higgs",
}

%%time
!docker pull {estimator_info['rapids_container']}

ECR_container_fullname = (
    f"{account}.dkr.ecr.{region}.amazonaws.com/{estimator_info['ecr_image']}"
)

ECR_container_fullname

print(
    f"source : {estimator_info['rapids_container']}\n"
    f"destination : {ECR_container_fullname}"
)

hyperparams = {
"n_estimators": 15,
"max_depth": 5,
"n_bins": 8,
"split_criterion": 0, # GINI:0, ENTROPY:1
"bootstrap": 0, # true: sample with replacement, false: sample without replacement
"max_leaves": -1, # unlimited leaves
"max_features": 0.2,
}

from sagemaker.estimator import Estimator

rapids_estimator = Estimator(
image_uri=ECR_container_fullname,
role=execution_role,
instance_count=1,
instance_type="ml.p3.2xlarge", #'local_gpu'
max_run=60 * 60 * 24,
max_wait=(60 * 60 * 24) + 1,
use_spot_instances=True,
hyperparameters=hyperparams,
metric_definitions=[{"Name": "test_acc", "Regex": "test_acc: ([0-9\\.]+)"}],
)

%%time
rapids_estimator.fit(inputs=s3_data_dir)

from sagemaker.tuner import (
IntegerParameter,
CategoricalParameter,
ContinuousParameter,
HyperparameterTuner,
)
hyperparameter_ranges = {
"n_estimators": IntegerParameter(10, 200),
"max_depth": IntegerParameter(1, 22),
"n_bins": IntegerParameter(5, 24),
"split_criterion": CategoricalParameter([0, 1]),
"bootstrap": CategoricalParameter([True, False]),
"max_features": ContinuousParameter(0.01, 0.5),
}

from sagemaker.estimator import Estimator

rapids_estimator = Estimator(
image_uri=ECR_container_fullname,
role=execution_role,
instance_count=2,
instance_type="ml.p3.8xlarge",
max_run=60 * 60 * 24,
max_wait=(60 * 60 * 24) + 1,
use_spot_instances=True,
hyperparameters=hyperparams,
metric_definitions=[{"Name": "test_acc", "Regex": "test_acc: ([0-9\\.]+)"}],
)

tuner = HyperparameterTuner(
rapids_estimator,
objective_metric_name="test_acc",
hyperparameter_ranges=hyperparameter_ranges,
strategy="Bayesian",
max_jobs=2,
max_parallel_jobs=2,
objective_type="Maximize",
metric_definitions=[{"Name": "test_acc", "Regex": "test_acc: ([0-9\\.]+)"}],
)

job_name = "rapidsHPO" + time.strftime("%Y-%m-%d-%H-%M-%S-%j", time.gmtime())
tuner.fit({"dataset": s3_data_dir}, job_name=job_name)

aws ecr delete-repository --force --repository-name
| 0 |
rapidsai_public_repos/deployment/source/examples
|
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-higgs/rapids-higgs.py
|
#!/usr/bin/env python
import argparse
import cudf
from cuml import RandomForestClassifier as cuRF
from cuml.preprocessing.model_selection import train_test_split
from sklearn.metrics import accuracy_score
def main(args):
# SageMaker options
data_dir = args.data_dir
col_names = ["label"] + [f"col-{i}" for i in range(2, 30)] # Assign column names
dtypes_ls = ["int32"] + [
"float32" for _ in range(2, 30)
] # Assign dtypes to each column
data = cudf.read_csv(data_dir + "HIGGS.csv", names=col_names, dtype=dtypes_ls)
X_train, X_test, y_train, y_test = train_test_split(data, "label", train_size=0.70)
# Hyper-parameters
hyperparams = {
"n_estimators": args.n_estimators,
"max_depth": args.max_depth,
"n_bins": args.n_bins,
"split_criterion": args.split_criterion,
"bootstrap": args.bootstrap,
"max_leaves": args.max_leaves,
"max_features": args.max_features,
}
cu_rf = cuRF(**hyperparams)
cu_rf.fit(X_train, y_train)
print("test_acc:", accuracy_score(cu_rf.predict(X_test), y_test))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
# Hyper-parameters
parser.add_argument("--n_estimators", type=int, default=20)
parser.add_argument("--max_depth", type=int, default=16)
parser.add_argument("--n_bins", type=int, default=8)
parser.add_argument("--split_criterion", type=int, default=0)
parser.add_argument("--bootstrap", type=bool, default=True)
parser.add_argument("--max_leaves", type=int, default=-1)
parser.add_argument("--max_features", type=float, default=0.2)
# SageMaker parameters
parser.add_argument("--model_output_dir", type=str, default="/opt/ml/output/")
parser.add_argument("--data_dir", type=str, default="/opt/ml/input/data/dataset/")
args = parser.parse_args()
main(args)
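# Local smoke-test sketch (an assumption, not part of the SageMaker flow shown above):
# reuse main() with the parser defaults, provided ./data/HIGGS.csv already exists at the
# hypothetical local data_dir used below.
def local_smoke_test():
    from types import SimpleNamespace

    main(
        SimpleNamespace(
            data_dir="./data/",
            n_estimators=20,
            max_depth=16,
            n_bins=8,
            split_criterion=0,
            bootstrap=True,
            max_leaves=-1,
            max_features=0.2,
            model_output_dir="/tmp/",
        )
    )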
| 0 |
rapidsai_public_repos/deployment/source/examples
|
rapidsai_public_repos/deployment/source/examples/rapids-sagemaker-higgs/Dockerfile
|
ARG RAPIDS_IMAGE
FROM $RAPIDS_IMAGE as rapids
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids \
&& pip3 install sagemaker-training cupy-cuda11x flask \
&& pip3 install --upgrade protobuf
# Copies the training code inside the container
COPY rapids-higgs.py /opt/ml/code/rapids-higgs.py
# Defines rapids-higgs.py as script entry point
ENV SAGEMAKER_PROGRAM rapids-higgs.py
| 0 |
rapidsai_public_repos/deployment/source/examples
|
rapidsai_public_repos/deployment/source/examples/rapids-azureml-hpo/notebook.ipynb
|
# verify Azure ML SDK version
%pip show azure-ai-ml

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
# Get a handle to the workspace
ml_client = MLClient(
credential=DefaultAzureCredential(),
subscription_id="fc4f4a6b-4041-4b1c-8249-854d68edcf62",
resource_group_name="rapidsai-deployment",
workspace_name="rapids-aml-cluster",
)
print(
"Workspace name: " + ml_client.workspace_name,
"Subscription id: " + ml_client.subscription_id,
"Resource group: " + ml_client.resource_group_name,
sep="\n",
)

datastore_name = "workspaceartifactstore"
dataset = "airline_20000000.parquet"
# Datastore uri format:
data_uri = f"azureml://subscriptions/{ml_client.subscription_id}/resourcegroups/{ml_client.resource_group_name}/workspaces/{ml_client.workspace_name}/datastores/{datastore_name}/paths/{dataset}"
print("data uri:", "\n", data_uri)from azure.ai.ml.entities import AmlCompute
# specify aml compute name.
gpu_compute_target = "rapids-cluster"
try:
# let's see if the compute target already exists
gpu_target = ml_client.compute.get(gpu_compute_target)
print(f"found compute target. Will use {gpu_compute_target}")
except:
print("Creating a new gpu compute target...")
gpu_target = AmlCompute(
name="rapids-cluster",
type="amlcompute",
size="STANDARD_NC12S_V3",
max_instances=5,
idle_time_before_scale_down=300,
)
ml_client.compute.begin_create_or_update(gpu_target).result()
print(
f"AMLCompute with name {gpu_target.name} is created, the compute size is {gpu_target.size}"
)

rapids_script = "./train_rapids.py"
azure_script = "./rapids_csp_azure.py"

experiment_name = "test_rapids_aml_cluster"

# RUN THIS CODE ONCE TO SETUP ENVIRONMENT
import os

from azure.ai.ml.entities import Environment, BuildContext
env_docker_image = Environment(
build=BuildContext(path=os.getcwd()),
name="rapids-mlflow",
description="RAPIDS environment with azureml-mlflow",
)
ml_client.environments.create_or_update(env_docker_image)

from azure.ai.ml import command, Input

command_job = command(
environment="rapids-mlflow:1",
experiment_name=experiment_name,
code=os.getcwd(),
inputs={
"data_dir": Input(type="uri_file", path=data_uri),
"n_bins": 32,
"compute": "single-GPU", # multi-GPU for algorithms via Dask
"cv_folds": 5,
"n_estimators": 100,
"max_depth": 6,
"max_features": 0.3,
},
command="python train_rapids.py --data_dir ${{inputs.data_dir}} --n_bins ${{inputs.n_bins}} --compute ${{inputs.compute}} --cv_folds ${{inputs.cv_folds}}\
--n_estimators ${{inputs.n_estimators}} --max_depth ${{inputs.max_depth}} --max_features ${{inputs.max_features}}",
compute="rapids-cluster",
)
# submit the command
returned_job = ml_client.jobs.create_or_update(command_job)
# get a URL for the status of the job
returned_job.studio_url

from azure.ai.ml.sweep import Choice, Uniform, MedianStoppingPolicy

command_job_for_sweep = command_job(
n_estimators=Choice(values=range(50, 500)),
max_depth=Choice(values=range(5, 19)),
max_features=Uniform(min_value=0.2, max_value=1.0),
)
# apply sweep parameter to obtain the sweep_job
sweep_job = command_job_for_sweep.sweep(
compute="rapids-cluster",
sampling_algorithm="random",
primary_metric="Accuracy",
goal="Maximize",
)
# Define the limits for this sweep
sweep_job.set_limits(
max_total_trials=10, max_concurrent_trials=2, timeout=18000, trial_timeout=3600
)
# Specify your experiment details
sweep_job.display_name = "RF-rapids-sweep-job"
sweep_job.description = "Run RAPIDS hyperparameter sweep job"

# submit the hpo job
returned_sweep_job = ml_client.create_or_update(sweep_job)

aml_url = returned_sweep_job.studio_url
print("Monitor your job at", aml_url)

ml_client.jobs.download(returned_sweep_job.name, output_name="model")

ml_client.compute.begin_delete(gpu_compute_target).wait()
| 0 |
rapidsai_public_repos/deployment/source/examples
|
rapidsai_public_repos/deployment/source/examples/rapids-azureml-hpo/train_rapids.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import argparse
import os
import cudf
import cuml
import mlflow
import numpy as np
from rapids_csp_azure import PerfTimer, RapidsCloudML
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--data_dir", type=str, help="location of data")
parser.add_argument(
"--n_estimators", type=int, default=100, help="Number of trees in RF"
)
parser.add_argument(
"--max_depth", type=int, default=16, help="Max depth of each tree"
)
parser.add_argument(
"--n_bins",
type=int,
default=8,
help="Number of bins used in split point calculation",
)
parser.add_argument(
"--max_features",
type=float,
default=1.0,
help="Number of features for best split",
)
parser.add_argument(
"--compute",
type=str,
default="single-GPU",
help="set to multi-GPU for algorithms via dask",
)
parser.add_argument(
"--cv_folds", type=int, default=5, help="Number of CV fold splits"
)
args = parser.parse_args()
data_dir = args.data_dir
compute = args.compute
cv_folds = args.cv_folds
n_estimators = args.n_estimators
mlflow.log_param("n_estimators", np.int(args.n_estimators))
max_depth = args.max_depth
mlflow.log_param("max_depth", np.int(args.max_depth))
n_bins = args.n_bins
mlflow.log_param("n_bins", np.int(args.n_bins))
max_features = args.max_features
mlflow.log_param("max_features", np.str(args.max_features))
print("\n---->>>> cuDF version <<<<----\n", cudf.__version__)
print("\n---->>>> cuML version <<<<----\n", cuml.__version__)
azure_ml = RapidsCloudML(
cloud_type="Azure",
model_type="RandomForest",
data_type="Parquet",
compute_type=compute,
)
print(args.compute)
if compute == "single-GPU":
dataset, _, y_label, _ = azure_ml.load_data(filename=data_dir)
else:
# use parquet files from 'https://airlinedataset.blob.core.windows.net/airline-10years' for multi-GPU training
dataset, _, y_label, _ = azure_ml.load_data(
filename=os.path.join(data_dir, "part*.parquet"),
col_labels=[
"Flight_Number_Reporting_Airline",
"Year",
"Quarter",
"Month",
"DayOfWeek",
"DOT_ID_Reporting_Airline",
"OriginCityMarketID",
"DestCityMarketID",
"DepTime",
"DepDelay",
"DepDel15",
"ArrDel15",
"ArrDelay",
"AirTime",
"Distance",
],
y_label="ArrDel15",
)
X = dataset[dataset.columns.difference(["ArrDelay", y_label])]
y = dataset[y_label]
del dataset
print("\n---->>>> Training using GPUs <<<<----\n")
# ----------------------------------------------------------------------------------------------------
# cross-validation folds
# ----------------------------------------------------------------------------------------------------
accuracy_per_fold = []
train_time_per_fold = []
infer_time_per_fold = []
trained_model = []
global_best_test_accuracy = 0
model_params = {
"n_estimators": n_estimators,
"max_depth": max_depth,
"max_features": max_features,
"n_bins": n_bins,
}
# optional cross-validation w/ model_params['n_train_folds'] > 1
for i_train_fold in range(cv_folds):
print(f"\n CV fold { i_train_fold } of { cv_folds }\n")
# split data
X_train, X_test, y_train, y_test, _ = azure_ml.split_data(
X, y, random_state=i_train_fold
)
# train model
trained_model, training_time = azure_ml.train_model(
X_train, y_train, model_params
)
train_time_per_fold.append(round(training_time, 4))
# evaluate perf
test_accuracy, infer_time = azure_ml.evaluate_test_perf(
trained_model, X_test, y_test
)
accuracy_per_fold.append(round(test_accuracy, 4))
infer_time_per_fold.append(round(infer_time, 4))
# update best model [ assumes maximization of perf metric ]
if test_accuracy > global_best_test_accuracy:
global_best_test_accuracy = test_accuracy
    mlflow.log_metric(
        "Total training inference time", float(training_time + infer_time)
    )
    mlflow.log_metric("Accuracy", float(global_best_test_accuracy))
print("\n Accuracy :", global_best_test_accuracy)
print("\n accuracy per fold :", accuracy_per_fold)
print("\n train-time per fold :", train_time_per_fold)
print("\n train-time all folds :", sum(train_time_per_fold))
print("\n infer-time per fold :", infer_time_per_fold)
print("\n infer-time all folds :", sum(infer_time_per_fold))
if __name__ == "__main__":
with PerfTimer() as total_script_time:
main()
print(f"Total runtime: {total_script_time.duration:.2f}")
mlflow.log_metric("Total runtime", np.float(total_script_time.duration))
print("\n Exiting script")
| 0 |
rapidsai_public_repos/deployment/source/examples
|
rapidsai_public_repos/deployment/source/examples/rapids-azureml-hpo/rapids_csp_azure.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
import json
import logging
import pprint
import time
import cudf
import cuml
import dask
import dask_cudf
import numpy as np
import pandas as pd
import pyarrow.orc as pyarrow_orc
import sklearn
import xgboost
from cuml.dask.common import utils as dask_utils
from cuml.metrics.accuracy import accuracy_score
from cuml.model_selection import train_test_split as cuml_train_test_split
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
from dask_ml.model_selection import train_test_split as dask_train_test_split
from sklearn.model_selection import train_test_split as sklearn_train_test_split
default_azureml_paths = {
"train_script": "./train_script",
"train_data": "./data_airline",
"output": "./output",
}
class RapidsCloudML:
def __init__(
self,
cloud_type="Azure",
model_type="RandomForest",
data_type="Parquet",
compute_type="single-GPU",
verbose_estimator=False,
CSP_paths=default_azureml_paths,
):
self.CSP_paths = CSP_paths
self.cloud_type = cloud_type
self.model_type = model_type
self.data_type = data_type
self.compute_type = compute_type
self.verbose_estimator = verbose_estimator
self.log_to_file(
f"\n> RapidsCloudML\n\tCompute, Data, Model, Cloud types "
f"{self.compute_type, self.data_type, self.model_type, self.cloud_type}"
)
# Setting up client for multi-GPU option
if "multi" in self.compute_type:
self.log_to_file("\n\tMulti-GPU selected")
# This will use all GPUs on the local host by default
cluster = LocalCUDACluster(threads_per_worker=1)
self.client = Client(cluster)
# Query the client for all connected workers
self.workers = self.client.has_what().keys()
self.n_workers = len(self.workers)
self.log_to_file(f"\n\tClient information {self.client}")
def load_hyperparams(self, model_name="XGBoost"):
"""
        Selecting model parameters based on the model we select for execution.
Checks if there is a config file present in the path self.CSP_paths['hyperparams'] with
the parameters for the experiment. If not present, it returns the default parameters.
Parameters
----------
model_name : string
Selects which model to set the parameters for. Takes either 'XGBoost' or 'RandomForest'.
Returns
----------
model_params : dict
Loaded model parameters (dict)
"""
self.log_to_file("\n> Loading Hyperparameters")
# Default parameters of the models
if self.model_type == "XGBoost":
# https://xgboost.readthedocs.io/en/latest/parameter.html
model_params = {
"max_depth": 6,
"num_boost_round": 100,
"learning_rate": 0.3,
"gamma": 0.0,
"lambda": 1.0,
"alpha": 0.0,
"objective": "binary:logistic",
"random_state": 0,
}
elif self.model_type == "RandomForest":
# https://docs.rapids.ai/api/cuml/stable/ -> cuml.ensemble.RandomForestClassifier
model_params = {
"n_estimators": 10,
"max_depth": 10,
"n_bins": 16,
"max_features": 1.0,
"seed": 0,
}
hyperparameters = {}
try:
with open(self.CSP_paths["hyperparams"]) as file_handle:
hyperparameters = json.load(file_handle)
for key, value in hyperparameters.items():
model_params[key] = value
pprint.pprint(model_params)
return model_params
except Exception as error:
self.log_to_file(str(error))
return
def load_data(
self, filename="dataset.orc", col_labels=None, y_label="ArrDelayBinary"
):
"""
Loading the data into the object from the filename and based on the columns that we are
interested in. Also, generates y_label from 'ArrDelay' column to convert this into a binary
classification problem.
Parameters
----------
filename : string
the path of the dataset to be loaded
col_labels : list of strings
The input columns that we are interested in. None selects all the columns
y_label : string
The column to perform the prediction task in.
Returns
----------
dataset : dataframe (Pandas, cudf or dask-cudf)
Ingested dataset in the format of a dataframe
col_labels : list of strings
The input columns selected
y_label : string
The generated y_label name for binary classification
duration : float
The time it took to execute the function
"""
target_filename = filename
self.log_to_file(f"\n> Loading dataset from {target_filename}")
with PerfTimer() as ingestion_timer:
if "CPU" in self.compute_type:
# CPU Reading options
self.log_to_file("\n\tCPU read")
if self.data_type == "ORC":
with open(target_filename, mode="rb") as file:
dataset = pyarrow_orc.ORCFile(file).read().to_pandas()
elif self.data_type == "CSV":
dataset = pd.read_csv(target_filename, names=col_labels)
elif self.data_type == "Parquet":
if "single" in self.compute_type:
dataset = pd.read_parquet(target_filename)
elif "multi" in self.compute_type:
self.log_to_file("\n\tReading using dask dataframe")
dataset = dask.dataframe.read_parquet(
target_filename, columns=col_labels
)
elif "GPU" in self.compute_type:
# GPU Reading Option
self.log_to_file("\n\tGPU read")
if self.data_type == "ORC":
dataset = cudf.read_orc(target_filename)
elif self.data_type == "CSV":
dataset = cudf.read_csv(target_filename, names=col_labels)
elif self.data_type == "Parquet":
if "single" in self.compute_type:
dataset = cudf.read_parquet(target_filename)
elif "multi" in self.compute_type:
self.log_to_file("\n\tReading using dask_cudf")
dataset = dask_cudf.read_parquet(
target_filename, columns=col_labels
)
# cast all columns to float32
for col in dataset.columns:
dataset[col] = dataset[col].astype(np.float32) # needed for random forest
# Adding y_label column if it is not present
if y_label not in dataset.columns:
dataset[y_label] = 1.0 * (dataset["ArrDelay"] > 10)
dataset[y_label] = dataset[y_label].astype(np.int32) # Needed for cuml RF
dataset = dataset.fillna(0.0) # Filling the null values. Needed for dask-cudf
self.log_to_file(f"\n\tIngestion completed in {ingestion_timer.duration}")
self.log_to_file(
f"\n\tDataset descriptors: {dataset.shape}\n\t{dataset.dtypes}"
)
return dataset, col_labels, y_label, ingestion_timer.duration
def split_data(
self, dataset, y_label, train_size=0.8, random_state=0, shuffle=True
):
"""
Splitting data into train and test split, has appropriate imports for different compute modes.
CPU compute - Uses sklearn, we manually filter y_label column in the split call
GPU Compute - Single GPU uses cuml and multi GPU uses dask, both split y_label internally.
Parameters
----------
dataset : dataframe
The dataframe on which we wish to perform the split
y_label : string
The name of the column (not the series itself)
train_size : float
The size for the split. Takes values between 0 to 1.
random_state : int
Useful for running reproducible splits.
shuffle : binary
Specifies if the data must be shuffled before splitting.
Returns
----------
X_train : dataframe
The data to be used for training. Has same type as input dataset.
X_test : dataframe
The data to be used for testing. Has same type as input dataset.
y_train : dataframe
The label to be used for training. Has same type as input dataset.
y_test : dataframe
The label to be used for testing. Has same type as input dataset.
duration : float
The time it took to perform the split
"""
self.log_to_file("\n> Splitting train and test data")
time.perf_counter()
with PerfTimer() as split_timer:
if "CPU" in self.compute_type:
X_train, X_test, y_train, y_test = sklearn_train_test_split(
dataset.loc[:, dataset.columns != y_label],
dataset[y_label],
train_size=train_size,
shuffle=shuffle,
random_state=random_state,
)
elif "GPU" in self.compute_type:
if "single" in self.compute_type:
X_train, X_test, y_train, y_test = cuml_train_test_split(
X=dataset,
y=y_label,
train_size=train_size,
shuffle=shuffle,
random_state=random_state,
)
elif "multi" in self.compute_type:
X_train, X_test, y_train, y_test = dask_train_test_split(
dataset,
y_label,
train_size=train_size,
shuffle=False, # shuffle not available for dask_cudf yet
random_state=random_state,
)
self.log_to_file(f"\n\tX_train shape and type{X_train.shape} {type(X_train)}")
self.log_to_file(f"\n\tSplit completed in {split_timer.duration}")
return X_train, X_test, y_train, y_test, split_timer.duration
def train_model(self, X_train, y_train, model_params):
"""
Trains a model with the model_params specified by calling fit_xgboost or
fit_random_forest depending on the model_type.
Parameters
----------
X_train : dataframe
            The data for training
y_train : dataframe
The label to be used for training.
model_params : dict
The model params to use for this training
Returns
----------
trained_model : The object of the trained model either of XGBoost or RandomForest
training_time : float
The time it took to train the model
"""
self.log_to_file(f"\n> Training {self.model_type} estimator w/ hyper-params")
training_time = 0
try:
if self.model_type == "XGBoost":
trained_model, training_time = self.fit_xgboost(
X_train, y_train, model_params
)
elif self.model_type == "RandomForest":
trained_model, training_time = self.fit_random_forest(
X_train, y_train, model_params
)
except Exception as error:
self.log_to_file("\n\n!error during model training: " + str(error))
self.log_to_file(f"\n\tFinished training in {training_time:.4f} s")
return trained_model, training_time
def fit_xgboost(self, X_train, y_train, model_params):
"""
Trains a XGBoost model on X_train and y_train with model_params
Parameters and Objects returned are same as trained_model
"""
if "GPU" in self.compute_type:
model_params.update({"tree_method": "gpu_hist"})
else:
model_params.update({"tree_method": "hist"})
with PerfTimer() as train_timer:
if "single" in self.compute_type:
train_DMatrix = xgboost.DMatrix(data=X_train, label=y_train)
trained_model = xgboost.train(
dtrain=train_DMatrix,
params=model_params,
num_boost_round=model_params["num_boost_round"],
)
elif "multi" in self.compute_type:
self.log_to_file("\n\tTraining multi-GPU XGBoost")
train_DMatrix = xgboost.dask.DaskDMatrix(
self.client, data=X_train, label=y_train
)
trained_model = xgboost.dask.train(
self.client,
dtrain=train_DMatrix,
params=model_params,
num_boost_round=model_params["num_boost_round"],
)
return trained_model, train_timer.duration
def fit_random_forest(self, X_train, y_train, model_params):
"""
Trains a RandomForest model on X_train and y_train with model_params.
Depending on compute_type, estimators from appropriate packages are used.
CPU - sklearn
Single-GPU - cuml
multi_gpu - cuml.dask
Parameters and Objects returned are same as trained_model
"""
if "CPU" in self.compute_type:
rf_model = sklearn.ensemble.RandomForestClassifier(
n_estimators=model_params["n_estimators"],
max_depth=model_params["max_depth"],
max_features=model_params["max_features"],
n_jobs=int(self.n_workers),
verbose=self.verbose_estimator,
)
elif "GPU" in self.compute_type:
if "single" in self.compute_type:
rf_model = cuml.ensemble.RandomForestClassifier(
n_estimators=model_params["n_estimators"],
max_depth=model_params["max_depth"],
n_bins=model_params["n_bins"],
max_features=model_params["max_features"],
verbose=self.verbose_estimator,
)
elif "multi" in self.compute_type:
self.log_to_file("\n\tFitting multi-GPU daskRF")
X_train, y_train = dask_utils.persist_across_workers(
self.client,
[X_train.fillna(0.0), y_train.fillna(0.0)],
workers=self.workers,
)
rf_model = cuml.dask.ensemble.RandomForestClassifier(
n_estimators=model_params["n_estimators"],
max_depth=model_params["max_depth"],
n_bins=model_params["n_bins"],
max_features=model_params["max_features"],
verbose=self.verbose_estimator,
)
with PerfTimer() as train_timer:
try:
trained_model = rf_model.fit(X_train, y_train)
except Exception as error:
self.log_to_file("\n\n! Error during fit " + str(error))
return trained_model, train_timer.duration
def evaluate_test_perf(self, trained_model, X_test, y_test, threshold=0.5):
"""
Evaluates the model performance on the inference set. For XGBoost we need
to generate a DMatrix and then we can evaluate the model.
For Random Forest, in single GPU case, we can just call .score function.
And multi-GPU Random Forest needs to predict on the model and then compute
the accuracy score.
Parameters
----------
trained_model : The object of the trained model either of XGBoost or RandomForest
X_test : dataframe
The data for testing
y_test : dataframe
The label to be used for testing.
Returns
----------
test_accuracy : float
The accuracy achieved on test set
duration : float
The time it took to evaluate the model
"""
self.log_to_file("\n> Inferencing on test set")
test_accuracy = None
with PerfTimer() as inference_timer:
try:
if self.model_type == "XGBoost":
if "multi" in self.compute_type:
test_DMatrix = xgboost.dask.DaskDMatrix(
self.client, data=X_test, label=y_test
)
xgb_pred = xgboost.dask.predict(
self.client, trained_model, test_DMatrix
).compute()
xgb_pred = (xgb_pred > threshold) * 1.0
test_accuracy = accuracy_score(y_test.compute(), xgb_pred)
elif "single" in self.compute_type:
test_DMatrix = xgboost.DMatrix(data=X_test, label=y_test)
xgb_pred = trained_model.predict(test_DMatrix)
xgb_pred = (xgb_pred > threshold) * 1.0
test_accuracy = accuracy_score(y_test, xgb_pred)
elif self.model_type == "RandomForest":
if "multi" in self.compute_type:
cuml_pred = trained_model.predict(X_test).compute()
self.log_to_file("\n\tPrediction complete")
test_accuracy = accuracy_score(
y_test.compute(), cuml_pred, convert_dtype=True
)
elif "single" in self.compute_type:
test_accuracy = trained_model.score(
X_test, y_test.astype("int32")
)
except Exception as error:
self.log_to_file("\n\n!error during inference: " + str(error))
self.log_to_file(f"\n\tFinished inference in {inference_timer.duration:.4f} s")
self.log_to_file(f"\n\tTest-accuracy: {test_accuracy}")
return test_accuracy, inference_timer.duration
def set_up_logging(self):
"""
Function to set up logging for the object.
"""
logging_path = self.CSP_paths["output"] + "/log.txt"
logging.basicConfig(filename=logging_path, level=logging.INFO)
def log_to_file(self, text):
"""
Logs the text that comes in as input.
"""
logging.info(text)
print(text)
# perf_counter = highest available timer resolution
class PerfTimer:
def __init__(self):
self.start = None
self.duration = None
def __enter__(self):
self.start = time.perf_counter()
return self
def __exit__(self, *args):
self.duration = time.perf_counter() - self.start
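# Usage sketch for the PerfTimer context manager defined above (illustrative only):
# it times an arbitrary block and exposes the elapsed wall-clock seconds as `.duration`.
def perf_timer_demo():
    with PerfTimer() as timer:
        sum(range(1_000_000))  # stand-in for any workload
    print(f"demo block took {timer.duration:.4f} s")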
| 0 |
rapidsai_public_repos/deployment/source/examples
|
rapidsai_public_repos/deployment/source/examples/rapids-azureml-hpo/Dockerfile
|
# Use rapids base image v23.02 with the necessary dependencies
FROM rapidsai/rapidsai:23.02-cuda11.8-runtime-ubuntu22.04-py3.10
# Update package information and install required packages
RUN apt-get update && \
apt-get install -y --no-install-recommends build-essential fuse && \
rm -rf /var/lib/apt/lists/*
# Activate rapids conda environment
RUN /bin/bash -c "source activate rapids && pip install azureml-mlflow azureml-dataprep"
| 0 |
rapidsai_public_repos/deployment/source/examples
|
rapidsai_public_repos/deployment/source/examples/xgboost-rf-gpu-cpu-benchmark/hpo.py
|
import argparse
import gc
import glob
import os
import time
from functools import partial
import dask
import optuna
import pandas as pd
import xgboost as xgb
from dask.distributed import Client, LocalCluster, wait
from dask_cuda import LocalCUDACluster
from sklearn.ensemble import RandomForestClassifier as RF_cpu
from sklearn.metrics import accuracy_score as accuracy_score_cpu
from sklearn.model_selection import train_test_split
n_cv_folds = 5
label_column = "ArrDel15"
feature_columns = [
"Year",
"Quarter",
"Month",
"DayOfWeek",
"Flight_Number_Reporting_Airline",
"DOT_ID_Reporting_Airline",
"OriginCityMarketID",
"DestCityMarketID",
"DepTime",
"DepDelay",
"DepDel15",
"ArrDel15",
"AirTime",
"Distance",
]
def ingest_data(target):
if target == "gpu":
import cudf
dataset = cudf.read_parquet(
glob.glob("./data/*.parquet"),
columns=feature_columns,
)
else:
dataset = pd.read_parquet(
glob.glob("./data/*.parquet"),
columns=feature_columns,
)
return dataset
def preprocess_data(dataset, *, i_fold):
dataset.dropna(inplace=True)
train, test = train_test_split(dataset, random_state=i_fold, shuffle=True)
X_train, y_train = train.drop(label_column, axis=1), train[label_column]
X_test, y_test = test.drop(label_column, axis=1), test[label_column]
X_train, y_train = X_train.astype("float32"), y_train.astype("int32")
X_test, y_test = X_test.astype("float32"), y_test.astype("int32")
return X_train, y_train, X_test, y_test
def train_xgboost(trial, *, target, reseed_rng, threads_per_worker=None):
reseed_rng() # Work around a bug in DaskStorage: https://github.com/optuna/optuna/issues/4859
dataset = ingest_data(target)
params = {
"max_depth": trial.suggest_int("max_depth", 4, 8),
"learning_rate": trial.suggest_float("learning_rate", 0.001, 0.1, log=True),
"min_child_weight": trial.suggest_float(
"min_child_weight", 0.1, 10.0, log=True
),
"reg_alpha": trial.suggest_float("reg_alpha", 0.0001, 100, log=True),
"reg_lambda": trial.suggest_float("reg_lambda", 0.0001, 100, log=True),
"verbosity": 0,
"objective": "binary:logistic",
}
num_boost_round = trial.suggest_int("num_boost_round", 100, 500, step=10)
cv_fold_scores = []
for i_fold in range(n_cv_folds):
X_train, y_train, X_test, y_test = preprocess_data(dataset, i_fold=i_fold)
dtrain = xgb.QuantileDMatrix(X_train, label=y_train)
dtest = xgb.QuantileDMatrix(X_test, ref=dtrain)
if target == "gpu":
from cuml.metrics import accuracy_score as accuracy_score_gpu
params["tree_method"] = "gpu_hist"
accuracy_score_func = accuracy_score_gpu
else:
params["tree_method"] = "hist"
params["nthread"] = threads_per_worker
accuracy_score_func = accuracy_score_cpu
trained_model = xgb.train(params, dtrain, num_boost_round=num_boost_round)
pred = trained_model.predict(dtest) > 0.5
pred = pred.astype("int32")
score = accuracy_score_func(y_test, pred)
cv_fold_scores.append(score)
del dtrain, dtest, X_train, y_train, X_test, y_test, trained_model
gc.collect()
final_score = sum(cv_fold_scores) / len(cv_fold_scores)
del dataset
gc.collect()
return final_score
def train_randomforest(trial, *, target, reseed_rng, threads_per_worker=None):
reseed_rng() # Work around a bug in DaskStorage: https://github.com/optuna/optuna/issues/4859
dataset = ingest_data(target)
params = {
"max_depth": trial.suggest_int("max_depth", 5, 15),
"max_features": trial.suggest_float("max_features", 0.1, 1.0),
"n_estimators": trial.suggest_int("n_estimators", 50, 100, step=5),
"min_samples_split": trial.suggest_int("min_samples_split", 2, 1000, log=True),
}
cv_fold_scores = []
for i_fold in range(n_cv_folds):
X_train, y_train, X_test, y_test = preprocess_data(dataset, i_fold=i_fold)
if target == "gpu":
from cuml.ensemble import RandomForestClassifier as RF_gpu
from cuml.metrics import accuracy_score as accuracy_score_gpu
params["n_streams"] = 4
params["n_bins"] = 256
params["split_criterion"] = trial.suggest_categorical(
"split_criterion", ["gini", "entropy"]
)
trained_model = RF_gpu(**params)
accuracy_score_func = accuracy_score_gpu
else:
params["n_jobs"] = threads_per_worker
params["criterion"] = trial.suggest_categorical(
"criterion", ["gini", "entropy"]
)
trained_model = RF_cpu(**params)
accuracy_score_func = accuracy_score_cpu
trained_model.fit(X_train, y_train)
pred = trained_model.predict(X_test)
score = accuracy_score_func(y_test, pred)
cv_fold_scores.append(score)
del X_train, y_train, X_test, y_test, trained_model
gc.collect()
final_score = sum(cv_fold_scores) / len(cv_fold_scores)
del dataset
gc.collect()
return final_score
def main(args):
tstart = time.perf_counter()
if args.target == "gpu":
cluster = LocalCUDACluster()
else:
dask.config.set({"distributed.worker.daemon": False})
cluster = LocalCluster(
n_workers=os.cpu_count() // args.threads_per_worker,
threads_per_worker=args.threads_per_worker,
)
n_workers = len(cluster.workers)
n_trials = 100
with Client(cluster) as client:
dask_storage = optuna.integration.DaskStorage(storage=None, client=client)
study = optuna.create_study(
direction="maximize",
sampler=optuna.samplers.RandomSampler(),
storage=dask_storage,
)
futures = []
if args.model_type == "XGBoost":
if args.target == "gpu":
objective_func = partial(
train_xgboost,
target=args.target,
reseed_rng=study.sampler.reseed_rng,
)
else:
objective_func = partial(
train_xgboost,
target=args.target,
reseed_rng=study.sampler.reseed_rng,
threads_per_worker=args.threads_per_worker,
)
else:
if args.target == "gpu":
objective_func = partial(
train_randomforest,
target=args.target,
reseed_rng=study.sampler.reseed_rng,
)
else:
objective_func = partial(
train_randomforest,
target=args.target,
reseed_rng=study.sampler.reseed_rng,
threads_per_worker=args.threads_per_worker,
)
for i in range(0, n_trials, n_workers * 2):
iter_range = (i, min([i + n_workers * 2, n_trials]))
futures = [
client.submit(
study.optimize,
objective_func,
n_trials=1,
pure=False,
)
for _ in range(*iter_range)
]
print(
f"Testing hyperparameter combinations {iter_range[0]}..{iter_range[1]}"
)
_ = wait(futures)
for fut in futures:
_ = fut.result() # Ensure that the training job was successful
tnow = time.perf_counter()
print(
f"Best cross-validation metric: {study.best_value}, Time elapsed = {tnow - tstart}"
)
tend = time.perf_counter()
print(f"Time elapsed: {tend - tstart} sec")
cluster.close()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--model-type", type=str, required=True, choices=["XGBoost", "RandomForest"]
)
parser.add_argument("--target", required=True, choices=["gpu", "cpu"])
parser.add_argument(
"--threads_per_worker",
required=False,
type=int,
default=8,
help="Number of threads per worker process. Only applicable for CPU target",
)
args = parser.parse_args()
main(args)
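# Minimal single-process sketch (an assumption, not the benchmark's own Dask-based flow):
# the same Optuna objective can be exercised for a quick CPU sanity check without a
# cluster, provided the ./data/*.parquet files are already present.
def smoke_test(n_trials=2):
    study = optuna.create_study(
        direction="maximize", sampler=optuna.samplers.RandomSampler()
    )
    objective = partial(
        train_xgboost,
        target="cpu",
        reseed_rng=study.sampler.reseed_rng,
        threads_per_worker=4,
    )
    study.optimize(objective, n_trials=n_trials)
    print("best accuracy:", study.best_value)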
| 0 |
rapidsai_public_repos/deployment/source/examples
|
rapidsai_public_repos/deployment/source/examples/xgboost-rf-gpu-cpu-benchmark/Dockerfile
|
FROM rapidsai/base:23.10a-cuda12.0-py3.10
RUN mamba install -y -n base optuna
| 0 |
rapidsai_public_repos/deployment/source/examples
|
rapidsai_public_repos/deployment/source/examples/time-series-forecasting-with-hpo/notebook.ipynb
|
bucket_name = "<Put the name of the bucket here>"

# Test if the bucket is accessible
import gcsfs
fs = gcsfs.GCSFileSystem()
fs.ls(f"{bucket_name}/")kaggle_username = "<Put your Kaggle username here>"
kaggle_api_key = "<Put your Kaggle API key here>"%env KAGGLE_USERNAME=$kaggle_username
%env KAGGLE_KEY=$kaggle_api_key
!kaggle competitions download -c m5-forecasting-accuracyimport zipfile
with zipfile.ZipFile("m5-forecasting-accuracy.zip", "r") as zf:
    zf.extractall(path="./data")

import cudf
import numpy as np
import gc
import pathlib
import gcsfs
def sizeof_fmt(num, suffix="B"):
for unit in ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi"]:
if abs(num) < 1024.0:
return f"{num:3.1f}{unit}{suffix}"
num /= 1024.0
return "%.1f%s%s" % (num, "Yi", suffix)
def report_dataframe_size(df, name):
print(
"{} takes up {} memory on GPU".format(
            name, sizeof_fmt(df.memory_usage(index=True).sum())
)
    )

TARGET = "sales"  # Our main target
END_TRAIN = 1941  # Last day in train set

raw_data_dir = pathlib.Path("./data/")

train_df = cudf.read_csv(raw_data_dir / "sales_train_evaluation.csv")
prices_df = cudf.read_csv(raw_data_dir / "sell_prices.csv")
calendar_df = cudf.read_csv(raw_data_dir / "calendar.csv").rename(
columns={"d": "day_id"}
)

train_df

prices_df

calendar_df

index_columns = ["id", "item_id", "dept_id", "cat_id", "store_id", "state_id"]
grid_df = cudf.melt(
train_df, id_vars=index_columns, var_name="day_id", value_name=TARGET
)
grid_df

add_grid = cudf.DataFrame()
for i in range(1, 29):
temp_df = train_df[index_columns]
temp_df = temp_df.drop_duplicates()
temp_df["day_id"] = "d_" + str(END_TRAIN + i)
temp_df[TARGET] = np.nan # Sales amount at time (n + i) is unknown
add_grid = cudf.concat([add_grid, temp_df])
add_grid["day_id"] = add_grid["day_id"].astype(
"category"
) # The day_id column is categorical, after cudf.melt
grid_df = cudf.concat([grid_df, add_grid])
grid_df = grid_df.reset_index(drop=True)
grid_df["sales"] = grid_df["sales"].astype(
np.float32
) # Use float32 type for sales column, to conserve memory
grid_df

# Use xdel magic to scrub extra references from Jupyter notebook
%xdel temp_df
%xdel add_grid
%xdel train_df
# Invoke the garbage collector explicitly to free up memory
gc.collect()

report_dataframe_size(grid_df, "grid_df")

grid_df.dtypes

for col in index_columns:
grid_df[col] = grid_df[col].astype("category")
gc.collect()
report_dataframe_size(grid_df, "grid_df")

grid_df.dtypes

prices_df

release_df = (
prices_df.groupby(["store_id", "item_id"])["wm_yr_wk"].agg("min").reset_index()
)
release_df.columns = ["store_id", "item_id", "release_week"]
release_df

grid_df = grid_df.merge(release_df, on=["store_id", "item_id"], how="left")
grid_df = grid_df.sort_values(index_columns + ["day_id"]).reset_index(drop=True)
grid_df

del release_df  # No longer needed
gc.collect()

report_dataframe_size(grid_df, "grid_df")

grid_df = grid_df.merge(calendar_df[["wm_yr_wk", "day_id"]], on=["day_id"], how="left")
grid_df

report_dataframe_size(grid_df, "grid_df")

df = grid_df[grid_df["wm_yr_wk"] < grid_df["release_week"]]

df

assert (df["sales"] == 0).all()

grid_df = grid_df[grid_df["wm_yr_wk"] >= grid_df["release_week"]].reset_index(drop=True)
grid_df["wm_yr_wk"] = grid_df["wm_yr_wk"].astype(
np.int32
) # Convert wm_yr_wk column to int32, to conserve memory
grid_df

report_dataframe_size(grid_df, "grid_df")

# Convert day_id to integers
grid_df["day_id_int"] = grid_df["day_id"].to_pandas().apply(lambda x: x[2:]).astype(int)
# Compute the total sales over the latest 28 days, per product item
last28 = grid_df[(grid_df["day_id_int"] >= 1914) & (grid_df["day_id_int"] < 1942)]
last28 = last28[["item_id", "wm_yr_wk", "sales"]].merge(
prices_df[["item_id", "wm_yr_wk", "sell_price"]], on=["item_id", "wm_yr_wk"]
)
last28["sales_usd"] = last28["sales"] * last28["sell_price"]
total_sales_usd = last28.groupby("item_id")[["sales_usd"]].agg(["sum"]).sort_index()
total_sales_usd.columns = total_sales_usd.columns.map("_".join)
total_sales_usd

weights = total_sales_usd / total_sales_usd.sum()
weights = weights.rename(columns={"sales_usd_sum": "weights"})
weights

# No longer needed
del grid_df["day_id_int"]

# Highest price over all weeks
prices_df["price_max"] = prices_df.groupby(["store_id", "item_id"])[
"sell_price"
].transform("max")
# Lowest price over all weeks
prices_df["price_min"] = prices_df.groupby(["store_id", "item_id"])[
"sell_price"
].transform("min")
# Standard deviation of the price
prices_df["price_std"] = prices_df.groupby(["store_id", "item_id"])[
"sell_price"
].transform("std")
# Mean (average) price over all weeks
prices_df["price_mean"] = prices_df.groupby(["store_id", "item_id"])[
"sell_price"
].transform("mean")prices_df["price_norm"] = prices_df["sell_price"] / prices_df["price_max"]prices_df["price_nunique"] = prices_df.groupby(["store_id", "item_id"])[
"sell_price"
].transform("nunique")prices_df["item_nunique"] = prices_df.groupby(["store_id", "sell_price"])[
"item_id"
].transform("nunique")prices_df# Add "month" and "year" columns to prices_df
week_to_month_map = calendar_df[["wm_yr_wk", "month", "year"]].drop_duplicates(
subset=["wm_yr_wk"]
)
prices_df = prices_df.merge(week_to_month_map, on=["wm_yr_wk"], how="left")
# Sort by wm_yr_wk. The rows will also be sorted in ascending months and years.
prices_df = prices_df.sort_values(["store_id", "item_id", "wm_yr_wk"])

# Compare with the average price in the previous week
prices_df["price_momentum"] = prices_df["sell_price"] / prices_df.groupby(
["store_id", "item_id"]
)["sell_price"].shift(1)
# Compare with the average price in the previous month
prices_df["price_momentum_m"] = prices_df["sell_price"] / prices_df.groupby(
["store_id", "item_id", "month"]
)["sell_price"].transform("mean")
# Compare with the average price in the previous year
prices_df["price_momentum_y"] = prices_df["sell_price"] / prices_df.groupby(
["store_id", "item_id", "year"]
)["sell_price"].transform("mean")# Remove "month" and "year" columns, as we don't need them any more
del prices_df["month"], prices_df["year"]
# Convert float64 columns into float32 type to save memory
columns = [
"sell_price",
"price_max",
"price_min",
"price_std",
"price_mean",
"price_norm",
"price_momentum",
"price_momentum_m",
"price_momentum_y",
]
for col in columns:
    prices_df[col] = prices_df[col].astype(np.float32)

prices_df.dtypes

# After merging price_df, keep columns id and day_id from grid_df and drop all other columns from grid_df
original_columns = list(grid_df)
grid_df_with_price = grid_df.copy()
grid_df_with_price = grid_df_with_price.merge(
prices_df, on=["store_id", "item_id", "wm_yr_wk"], how="left"
)
columns_to_keep = ["id", "day_id"] + [
col for col in list(grid_df_with_price) if col not in original_columns
]
grid_df_with_price = grid_df_with_price[["id", "day_id"] + columns_to_keep]
grid_df_with_price

# Bring in the following columns from calendar_df into grid_df
grid_df_id_only = grid_df[["id", "day_id"]].copy()
icols = [
"date",
"day_id",
"event_name_1",
"event_type_1",
"event_name_2",
"event_type_2",
"snap_CA",
"snap_TX",
"snap_WI",
]
grid_df_with_calendar = grid_df_id_only.merge(
calendar_df[icols], on=["day_id"], how="left"
)
grid_df_with_calendar

# Convert columns into categorical type to save memory
for col in [
"event_name_1",
"event_type_1",
"event_name_2",
"event_type_2",
"snap_CA",
"snap_TX",
"snap_WI",
]:
grid_df_with_calendar[col] = grid_df_with_calendar[col].astype("category")
# Convert "date" column into timestamp type
grid_df_with_calendar["date"] = cudf.to_datetime(grid_df_with_calendar["date"])
import cupy as cp
grid_df_with_calendar["tm_d"] = grid_df_with_calendar["date"].dt.day.astype(np.int8)
grid_df_with_calendar["tm_w"] = (
grid_df_with_calendar["date"].dt.isocalendar().week.astype(np.int8)
)
grid_df_with_calendar["tm_m"] = grid_df_with_calendar["date"].dt.month.astype(np.int8)
grid_df_with_calendar["tm_y"] = grid_df_with_calendar["date"].dt.year
grid_df_with_calendar["tm_y"] = (
grid_df_with_calendar["tm_y"] - grid_df_with_calendar["tm_y"].min()
).astype(np.int8)
grid_df_with_calendar["tm_wm"] = cp.ceil(
grid_df_with_calendar["tm_d"].to_cupy() / 7
).astype(
np.int8
) # which week in the month?
grid_df_with_calendar["tm_dw"] = grid_df_with_calendar["date"].dt.dayofweek.astype(
np.int8
) # which day in the week?
grid_df_with_calendar["tm_w_end"] = (grid_df_with_calendar["tm_dw"] >= 5).astype(
np.int8
) # whether today is in the weekend
del grid_df_with_calendar["date"] # no longer needed
grid_df_with_calendar
del grid_df_id_only # No longer needed
gc.collect()
SHIFT_DAY = 28
LAG_DAYS = [col for col in range(SHIFT_DAY, SHIFT_DAY + 15)]
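# SHIFT_DAY matches the 28-day forecast horizon of the held-out test set,
# so every lag feature looks at least 28 days back and never overlaps the
# window we are trying to predict.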
# Need to first ensure that rows in each time series are sorted by day_id
grid_df_lags = grid_df[["id", "day_id", "sales"]].copy()
grid_df_lags = grid_df_lags.sort_values(["id", "day_id"])
grid_df_lags = grid_df_lags.assign(
**{
f"sales_lag_{l}": grid_df_lags.groupby(["id"])["sales"].shift(l)
for l in LAG_DAYS
}
)
grid_df_lags
# Shift by 28 days and apply windows of various sizes
print(f"Shift size: {SHIFT_DAY}")
for i in [7, 14, 30, 60, 180]:
print(f" Window size: {i}")
grid_df_lags[f"rolling_mean_{i}"] = (
grid_df_lags.groupby(["id"])["sales"]
.shift(SHIFT_DAY)
.rolling(i)
.mean()
.astype(np.float32)
)
grid_df_lags[f"rolling_std_{i}"] = (
grid_df_lags.groupby(["id"])["sales"]
.shift(SHIFT_DAY)
.rolling(i)
.std()
.astype(np.float32)
    )
grid_df_lags.columns
grid_df_lags.dtypes
grid_df_lags
icols = [["store_id", "dept_id"], ["item_id", "state_id"]]
new_columns = []
grid_df_target_enc = grid_df[
["id", "day_id", "item_id", "state_id", "store_id", "dept_id", "sales"]
].copy()
grid_df_target_enc["sales"].fillna(value=0, inplace=True)
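# Target encoding: for each column combination in icols (store_id+dept_id and
# item_id+state_id), summarize sales with the per-group mean and standard deviation.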
for col in icols:
print(f"Encoding columns {col}")
col_name = "_" + "_".join(col) + "_"
grid_df_target_enc["enc" + col_name + "mean"] = (
grid_df_target_enc.groupby(col)["sales"].transform("mean").astype(np.float32)
)
grid_df_target_enc["enc" + col_name + "std"] = (
grid_df_target_enc.groupby(col)["sales"].transform("std").astype(np.float32)
)
    new_columns.extend(["enc" + col_name + "mean", "enc" + col_name + "std"])
grid_df_target_enc = grid_df_target_enc[["id", "day_id"] + new_columns]
grid_df_target_enc
grid_df_target_enc.dtypes
segmented_data_dir = pathlib.Path("./segmented_data/")
segmented_data_dir.mkdir(exist_ok=True)
STORES = [
"CA_1",
"CA_2",
"CA_3",
"CA_4",
"TX_1",
"TX_2",
"TX_3",
"WI_1",
"WI_2",
"WI_3",
]
DEPTS = [
"HOBBIES_1",
"HOBBIES_2",
"HOUSEHOLD_1",
"HOUSEHOLD_2",
"FOODS_1",
"FOODS_2",
"FOODS_3",
]
grid2_colnm = [
"sell_price",
"price_max",
"price_min",
"price_std",
"price_mean",
"price_norm",
"price_nunique",
"item_nunique",
"price_momentum",
"price_momentum_m",
"price_momentum_y",
]
grid3_colnm = [
"event_name_1",
"event_type_1",
"event_name_2",
"event_type_2",
"snap_CA",
"snap_TX",
"snap_WI",
"tm_d",
"tm_w",
"tm_m",
"tm_y",
"tm_wm",
"tm_dw",
"tm_w_end",
]
lag_colnm = [
"sales_lag_28",
"sales_lag_29",
"sales_lag_30",
"sales_lag_31",
"sales_lag_32",
"sales_lag_33",
"sales_lag_34",
"sales_lag_35",
"sales_lag_36",
"sales_lag_37",
"sales_lag_38",
"sales_lag_39",
"sales_lag_40",
"sales_lag_41",
"sales_lag_42",
"rolling_mean_7",
"rolling_std_7",
"rolling_mean_14",
"rolling_std_14",
"rolling_mean_30",
"rolling_std_30",
"rolling_mean_60",
"rolling_std_60",
"rolling_mean_180",
"rolling_std_180",
]
target_enc_colnm = [
"enc_store_id_dept_id_mean",
"enc_store_id_dept_id_std",
"enc_item_id_state_id_mean",
"enc_item_id_state_id_std",
]
def prepare_data(store, dept=None):
"""
Filter and clean data according to stores and product departments
Parameters
----------
store: Filter data by retaining rows whose store_id matches this parameter.
dept: Filter data by retaining rows whose dept_id matches this parameter.
This parameter can be set to None to indicate that we shouldn't filter by dept_id.
"""
if store is None:
        raise ValueError("store parameter must not be None")
if dept is None:
grid1 = grid_df[grid_df["store_id"] == store]
else:
grid1 = grid_df[
(grid_df["store_id"] == store) & (grid_df["dept_id"] == dept)
].drop(columns=["dept_id"])
grid1 = grid1.drop(columns=["release_week", "wm_yr_wk", "store_id", "state_id"])
grid2 = grid_df_with_price[["id", "day_id"] + grid2_colnm]
grid_combined = grid1.merge(grid2, on=["id", "day_id"], how="left")
del grid1, grid2
grid3 = grid_df_with_calendar[["id", "day_id"] + grid3_colnm]
grid_combined = grid_combined.merge(grid3, on=["id", "day_id"], how="left")
del grid3
lag_df = grid_df_lags[["id", "day_id"] + lag_colnm]
grid_combined = grid_combined.merge(lag_df, on=["id", "day_id"], how="left")
del lag_df
target_enc_df = grid_df_target_enc[["id", "day_id"] + target_enc_colnm]
grid_combined = grid_combined.merge(target_enc_df, on=["id", "day_id"], how="left")
del target_enc_df
gc.collect()
grid_combined = grid_combined.drop(columns=["id"])
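    # day_id is a categorical label (e.g. "d_1914"); strip the two-character
    # prefix and cast to int16 so it can be compared against numeric day ranges.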
grid_combined["day_id"] = (
grid_combined["day_id"]
.to_pandas()
.astype("str")
.apply(lambda x: x[2:])
.astype(np.int16)
)
    return grid_combined
# First save the segment to the disk
for store in STORES:
print(f"Processing store {store}...")
segment_df = prepare_data(store=store)
segment_df.to_pandas().to_pickle(
segmented_data_dir / f"combined_df_store_{store}.pkl"
)
del segment_df
gc.collect()
for store in STORES:
for dept in DEPTS:
print(f"Processing (store {store}, department {dept})...")
segment_df = prepare_data(store=store, dept=dept)
segment_df.to_pandas().to_pickle(
segmented_data_dir / f"combined_df_store_{store}_dept_{dept}.pkl"
)
del segment_df
        gc.collect()
# Then copy the segment to Cloud Storage
fs = gcsfs.GCSFileSystem()
for e in segmented_data_dir.glob("*.pkl"):
print(f"Uploading {e}...")
basename = e.name
    fs.put_file(e, f"{bucket_name}/{basename}")
# Also upload the product weights
fs = gcsfs.GCSFileSystem()
weights.to_pandas().to_pickle("product_weights.pkl")
fs.put_file("product_weights.pkl", f"{bucket_name}/product_weights.pkl")
import cudf
import gcsfs
import xgboost as xgb
import pandas as pd
import numpy as np
import optuna
import gc
import time
import pickle
import copy
import json
import matplotlib.pyplot as plt
from matplotlib.patches import Patch
import matplotlib
from dask.distributed import wait
from dask_kubernetes.operator import KubeCluster
from dask.distributed import Client
# Choose the same RAPIDS image you used for launching the notebook session
rapids_image = "rapidsai/notebooks:23.10a-cuda12.0-py3.10"
# Use the number of worker nodes in your Kubernetes cluster.
n_workers = 2
# Bucket that contains the processed data pickles
bucket_name = "<Put the name of the bucket here>"
bucket_name = "phcho-m5-competition-hpo-example"
# List of stores and product departments
STORES = [
"CA_1",
"CA_2",
"CA_3",
"CA_4",
"TX_1",
"TX_2",
"TX_3",
"WI_1",
"WI_2",
"WI_3",
]
DEPTS = [
"HOBBIES_1",
"HOBBIES_2",
"HOUSEHOLD_1",
"HOUSEHOLD_2",
"FOODS_1",
"FOODS_2",
"FOODS_3",
]
# Cross-validation folds and held-out test set (in time dimension)
# The held-out test set is used for final evaluation
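# Each fold is a pair of half-open [start, end) day_id ranges; the training and
# validation masks below select rows with day_id >= start and day_id < end.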
cv_folds = [ # (train_set, validation_set)
([0, 1114], [1114, 1314]),
([0, 1314], [1314, 1514]),
([0, 1514], [1514, 1714]),
([0, 1714], [1714, 1914]),
]
n_folds = len(cv_folds)
holdout = [1914, 1942]
time_horizon = 1942
cv_cmap = matplotlib.colormaps["cividis"]
plt.figure(figsize=(8, 3))
for i, (train_mask, valid_mask) in enumerate(cv_folds):
idx = np.array([np.nan] * time_horizon)
idx[np.arange(*train_mask)] = 1
idx[np.arange(*valid_mask)] = 0
plt.scatter(
range(time_horizon),
[i + 0.5] * time_horizon,
c=idx,
marker="_",
capstyle="butt",
s=1,
lw=20,
cmap=cv_cmap,
vmin=-1.5,
vmax=1.5,
)
idx = np.array([np.nan] * time_horizon)
idx[np.arange(*holdout)] = -1
plt.scatter(
range(time_horizon),
[n_folds + 0.5] * time_horizon,
c=idx,
marker="_",
capstyle="butt",
s=1,
lw=20,
cmap=cv_cmap,
vmin=-1.5,
vmax=1.5,
)
plt.xlabel("Time")
plt.yticks(
ticks=np.arange(n_folds + 1) + 0.5,
labels=[f"Fold {i}" for i in range(n_folds)] + ["Holdout"],
)
plt.ylim([len(cv_folds) + 1.2, -0.2])
norm = matplotlib.colors.Normalize(vmin=-1.5, vmax=1.5)
plt.legend(
[
Patch(color=cv_cmap(norm(1))),
Patch(color=cv_cmap(norm(0))),
Patch(color=cv_cmap(norm(-1))),
],
["Training set", "Validation set", "Held-out test set"],
ncol=3,
loc="best",
)
plt.tight_layout()
cluster = KubeCluster(
name="rapids-dask",
image=rapids_image,
worker_command="dask-cuda-worker",
n_workers=n_workers,
resources={"limits": {"nvidia.com/gpu": "1"}},
env={"EXTRA_PIP_PACKAGES": "optuna gcsfs"},
)
cluster
client = Client(cluster)
client
def wrmsse(product_weights, df, pred_sales, train_mask, valid_mask):
"""Compute WRMSSE metric"""
df_train = df[(df["day_id"] >= train_mask[0]) & (df["day_id"] < train_mask[1])]
df_valid = df[(df["day_id"] >= valid_mask[0]) & (df["day_id"] < valid_mask[1])]
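    # Sketch of the metric (the standard M5 definition) for each item i:
    #   RMSSE_i = sqrt( (1/h) * sum_t (y_t - yhat_t)^2
    #                   / ( (1/(n-1)) * sum_t (y_t - y_(t-1))^2 ) )
    #   WRMSSE  = sum_i weights_i * RMSSE_i
    # where h is the length of the validation window and n the number of training days.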
# Compute denominator: 1/(n-1) * sum( (y(t) - y(t-1))**2 )
diff = (
df_train.sort_values(["item_id", "day_id"])
.groupby(["item_id"])[["sales"]]
.diff(1)
)
x = (
df_train[["item_id", "day_id"]]
.join(diff, how="left")
.rename(columns={"sales": "diff"})
.sort_values(["item_id", "day_id"])
)
x["diff"] = x["diff"] ** 2
xx = x.groupby(["item_id"])[["diff"]].agg(["sum", "count"]).sort_index()
xx.columns = xx.columns.map("_".join)
xx["denominator"] = xx["diff_sum"] / xx["diff_count"]
t = xx.reset_index()
# Compute numerator: 1/h * sum( (y(t) - y_pred(t))**2 )
X_valid = df_valid.drop(columns=["item_id", "cat_id", "day_id", "sales"])
if "dept_id" in X_valid.columns:
X_valid = X_valid.drop(columns=["dept_id"])
df_pred = cudf.DataFrame(
{
"item_id": df_valid["item_id"].copy(),
"pred_sales": pred_sales,
"sales": df_valid["sales"].copy(),
}
)
df_pred["diff"] = (df_pred["sales"] - df_pred["pred_sales"]) ** 2
yy = df_pred.groupby(["item_id"])[["diff"]].agg(["sum", "count"]).sort_index()
yy.columns = yy.columns.map("_".join)
yy["numerator"] = yy["diff_sum"] / yy["diff_count"]
zz = yy[["numerator"]].join(xx[["denominator"]], how="left")
zz = zz.join(product_weights, how="left").sort_index()
# Filter out zero denominator.
# This can occur if the product was never on sale during the period in the training set
zz = zz[zz["denominator"] != 0]
zz["rmsse"] = np.sqrt(zz["numerator"] / zz["denominator"])
t = zz["rmsse"].multiply(zz["weights"])
    return zz["rmsse"].multiply(zz["weights"]).sum()
def objective(trial):
fs = gcsfs.GCSFileSystem()
with fs.open(f"{bucket_name}/product_weights.pkl", "rb") as f:
product_weights = cudf.DataFrame(pd.read_pickle(f))
params = {
"n_estimators": 100,
"verbosity": 0,
"learning_rate": 0.01,
"objective": "reg:tweedie",
"tree_method": "gpu_hist",
"grow_policy": "depthwise",
"predictor": "gpu_predictor",
"enable_categorical": True,
"lambda": trial.suggest_float("lambda", 1e-8, 100.0, log=True),
"alpha": trial.suggest_float("alpha", 1e-8, 100.0, log=True),
"colsample_bytree": trial.suggest_float("colsample_bytree", 0.2, 1.0),
"max_depth": trial.suggest_int("max_depth", 2, 6, step=1),
"min_child_weight": trial.suggest_float(
"min_child_weight", 1e-8, 100, log=True
),
"gamma": trial.suggest_float("gamma", 1e-8, 1.0, log=True),
"tweedie_variance_power": trial.suggest_float("tweedie_variance_power", 1, 2),
}
scores = [[] for store in STORES]
for store_id, store in enumerate(STORES):
print(f"Processing store {store}...")
with fs.open(f"{bucket_name}/combined_df_store_{store}.pkl", "rb") as f:
df = cudf.DataFrame(pd.read_pickle(f))
for train_mask, valid_mask in cv_folds:
df_train = df[
(df["day_id"] >= train_mask[0]) & (df["day_id"] < train_mask[1])
]
df_valid = df[
(df["day_id"] >= valid_mask[0]) & (df["day_id"] < valid_mask[1])
]
X_train, y_train = (
df_train.drop(
columns=["item_id", "dept_id", "cat_id", "day_id", "sales"]
),
df_train["sales"],
)
X_valid = df_valid.drop(
columns=["item_id", "dept_id", "cat_id", "day_id", "sales"]
)
clf = xgb.XGBRegressor(**params)
clf.fit(X_train, y_train)
pred_sales = clf.predict(X_valid)
scores[store_id].append(
wrmsse(product_weights, df, pred_sales, train_mask, valid_mask)
)
del df_train, df_valid, X_train, y_train, clf
gc.collect()
del df
gc.collect()
# We can sum WRMSSE scores over data segments because data segments contain disjoint sets of time series
    return np.array(scores).sum(axis=0).mean()
# Number of hyperparameter combinations to try in parallel
n_trials = 9 # Using a small n_trials so that the demo can finish quickly
# n_trials = 100
# Optimize in parallel on your Dask cluster
backend_storage = optuna.storages.InMemoryStorage()
dask_storage = optuna.integration.DaskStorage(storage=backend_storage, client=client)
study = optuna.create_study(
direction="minimize",
sampler=optuna.samplers.RandomSampler(seed=0),
storage=dask_storage,
)
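# Submit trials in batches of n_workers so that each Dask worker runs one trial
# at a time; wait() below blocks until a whole batch finishes before the next
# batch is launched.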
futures = []
for i in range(0, n_trials, n_workers):
iter_range = (i, min([i + n_workers, n_trials]))
futures.append(
{
"range": iter_range,
"futures": [
client.submit(
# Work around bug https://github.com/optuna/optuna/issues/4859
lambda objective, n_trials: (
study.sampler.reseed_rng(),
study.optimize(objective, n_trials),
),
objective,
n_trials=1,
pure=False,
)
for _ in range(*iter_range)
],
}
)
tstart = time.perf_counter()
for partition in futures:
iter_range = partition["range"]
print(f"Testing hyperparameter combinations {iter_range[0]}..{iter_range[1]}")
_ = wait(partition["futures"])
for fut in partition["futures"]:
_ = fut.result() # Ensure that the training job was successful
tnow = time.perf_counter()
print(
f"Best cross-validation metric: {study.best_value}, Time elapsed = {tnow - tstart}"
)
tend = time.perf_counter()
print(f"Total time elapsed = {tend - tstart}")
study.best_params
study.best_trial
# Make a deep copy to preserve the dictionary after deleting the Dask cluster
best_params = copy.deepcopy(study.best_params)
best_params
fs = gcsfs.GCSFileSystem()
with fs.open(f"{bucket_name}/params.json", "w") as f:
    json.dump(best_params, f)
fs = gcsfs.GCSFileSystem()
with fs.open(f"{bucket_name}/params.json", "r") as f:
best_params = json.load(f)
with fs.open(f"{bucket_name}/product_weights.pkl", "rb") as f:
    product_weights = cudf.DataFrame(pd.read_pickle(f))
def final_train(best_params):
fs = gcsfs.GCSFileSystem()
params = {
"n_estimators": 100,
"verbosity": 0,
"learning_rate": 0.01,
"objective": "reg:tweedie",
"tree_method": "gpu_hist",
"grow_policy": "depthwise",
"predictor": "gpu_predictor",
"enable_categorical": True,
}
params.update(best_params)
model = {}
train_mask = [0, 1914]
for store in STORES:
print(f"Processing store {store}...")
with fs.open(f"{bucket_name}/combined_df_store_{store}.pkl", "rb") as f:
df = cudf.DataFrame(pd.read_pickle(f))
df_train = df[(df["day_id"] >= train_mask[0]) & (df["day_id"] < train_mask[1])]
X_train, y_train = (
df_train.drop(columns=["item_id", "dept_id", "cat_id", "day_id", "sales"]),
df_train["sales"],
)
clf = xgb.XGBRegressor(**params)
clf.fit(X_train, y_train)
model[store] = clf
del df
gc.collect()
    return model
model = final_train(best_params)
test_wrmsse = 0
for store in STORES:
with fs.open(f"{bucket_name}/combined_df_store_{store}.pkl", "rb") as f:
df = cudf.DataFrame(pd.read_pickle(f))
df_test = df[(df["day_id"] >= holdout[0]) & (df["day_id"] < holdout[1])]
X_test = df_test.drop(columns=["item_id", "dept_id", "cat_id", "day_id", "sales"])
pred_sales = model[store].predict(X_test)
test_wrmsse += wrmsse(
product_weights, df, pred_sales, train_mask=[0, 1914], valid_mask=holdout
)
print(f"WRMSSE metric on the held-out test set: {test_wrmsse}")
# Save the model to Cloud Storage
with fs.open(f"{bucket_name}/final_model.pkl", "wb") as f:
    pickle.dump(model, f)
def objective_alt(trial):
fs = gcsfs.GCSFileSystem()
with fs.open(f"{bucket_name}/product_weights.pkl", "rb") as f:
product_weights = cudf.DataFrame(pd.read_pickle(f))
params = {
"n_estimators": 100,
"verbosity": 0,
"learning_rate": 0.01,
"objective": "reg:tweedie",
"tree_method": "gpu_hist",
"grow_policy": "depthwise",
"predictor": "gpu_predictor",
"enable_categorical": True,
"lambda": trial.suggest_float("lambda", 1e-8, 100.0, log=True),
"alpha": trial.suggest_float("alpha", 1e-8, 100.0, log=True),
"colsample_bytree": trial.suggest_float("colsample_bytree", 0.2, 1.0),
"max_depth": trial.suggest_int("max_depth", 2, 6, step=1),
"min_child_weight": trial.suggest_float(
"min_child_weight", 1e-8, 100, log=True
),
"gamma": trial.suggest_float("gamma", 1e-8, 1.0, log=True),
"tweedie_variance_power": trial.suggest_float("tweedie_variance_power", 1, 2),
}
scores = [[] for i in range(len(STORES) * len(DEPTS))]
for store_id, store in enumerate(STORES):
for dept_id, dept in enumerate(DEPTS):
print(f"Processing store {store}, department {dept}...")
with fs.open(
f"{bucket_name}/combined_df_store_{store}_dept_{dept}.pkl", "rb"
) as f:
df = cudf.DataFrame(pd.read_pickle(f))
for train_mask, valid_mask in cv_folds:
df_train = df[
(df["day_id"] >= train_mask[0]) & (df["day_id"] < train_mask[1])
]
df_valid = df[
(df["day_id"] >= valid_mask[0]) & (df["day_id"] < valid_mask[1])
]
X_train, y_train = (
df_train.drop(columns=["item_id", "cat_id", "day_id", "sales"]),
df_train["sales"],
)
X_valid = df_valid.drop(
columns=["item_id", "cat_id", "day_id", "sales"]
)
clf = xgb.XGBRegressor(**params)
clf.fit(X_train, y_train)
sales_pred = clf.predict(X_valid)
scores[store_id * len(DEPTS) + dept_id].append(
wrmsse(product_weights, df, sales_pred, train_mask, valid_mask)
)
del df_train, df_valid, X_train, y_train, clf
gc.collect()
del df
gc.collect()
# We can sum WRMSSE scores over data segments because data segments contain disjoint sets of time series
    return np.array(scores).sum(axis=0).mean()
# Number of hyperparameter combinations to try in parallel
n_trials = 9 # Using a small n_trials so that the demo can finish quickly
# n_trials = 100
# Optimize in parallel on your Dask cluster
backend_storage = optuna.storages.InMemoryStorage()
dask_storage = optuna.integration.DaskStorage(storage=backend_storage, client=client)
study = optuna.create_study(
direction="minimize",
sampler=optuna.samplers.RandomSampler(seed=0),
storage=dask_storage,
)
futures = []
for i in range(0, n_trials, n_workers):
iter_range = (i, min([i + n_workers, n_trials]))
futures.append(
{
"range": iter_range,
"futures": [
client.submit(
# Work around bug https://github.com/optuna/optuna/issues/4859
lambda objective, n_trials: (
study.sampler.reseed_rng(),
study.optimize(objective, n_trials),
),
objective_alt,
n_trials=1,
pure=False,
)
for _ in range(*iter_range)
],
}
)
tstart = time.perf_counter()
for partition in futures:
iter_range = partition["range"]
print(f"Testing hyperparameter combinations {iter_range[0]}..{iter_range[1]}")
_ = wait(partition["futures"])
for fut in partition["futures"]:
_ = fut.result() # Ensure that the training job was successful
tnow = time.perf_counter()
print(
f"Best cross-validation metric: {study.best_value}, Time elapsed = {tnow - tstart}"
)
tend = time.perf_counter()
print(f"Total time elapsed = {tend - tstart}")
# Make a deep copy to preserve the dictionary after deleting the Dask cluster
best_params_alt = copy.deepcopy(study.best_params)
best_params_alt
fs = gcsfs.GCSFileSystem()
with fs.open(f"{bucket_name}/params_alt.json", "w") as f:
    json.dump(best_params_alt, f)
def final_train_alt(best_params):
fs = gcsfs.GCSFileSystem()
with fs.open(f"{bucket_name}/product_weights.pkl", "rb") as f:
product_weights = cudf.DataFrame(pd.read_pickle(f))
params = {
"n_estimators": 100,
"verbosity": 0,
"learning_rate": 0.01,
"objective": "reg:tweedie",
"tree_method": "gpu_hist",
"grow_policy": "depthwise",
"predictor": "gpu_predictor",
"enable_categorical": True,
}
params.update(best_params)
model = {}
train_mask = [0, 1914]
for store_id, store in enumerate(STORES):
for dept_id, dept in enumerate(DEPTS):
print(f"Processing store {store}, department {dept}...")
with fs.open(
f"{bucket_name}/combined_df_store_{store}_dept_{dept}.pkl", "rb"
) as f:
df = cudf.DataFrame(pd.read_pickle(f))
            # Train one model per (store, dept) on the full training range,
            # mirroring final_train above (no cross-validation loop here).
            df_train = df[
                (df["day_id"] >= train_mask[0]) & (df["day_id"] < train_mask[1])
            ]
            X_train, y_train = (
                df_train.drop(columns=["item_id", "cat_id", "day_id", "sales"]),
                df_train["sales"],
            )
            clf = xgb.XGBRegressor(**params)
            clf.fit(X_train, y_train)
            model[(store, dept)] = clf
del df
gc.collect()
    return model
fs = gcsfs.GCSFileSystem()
with fs.open(f"{bucket_name}/params_alt.json", "r") as f:
best_params_alt = json.load(f)
with fs.open(f"{bucket_name}/product_weights.pkl", "rb") as f:
    product_weights = cudf.DataFrame(pd.read_pickle(f))
model_alt = final_train_alt(best_params_alt)
# Save the model to Cloud Storage
with fs.open(f"{bucket_name}/final_model_alt.pkl", "wb") as f:
    pickle.dump(model_alt, f)
test_wrmsse = 0
for store in STORES:
print(f"Processing store {store}...")
# Prediction from Model 1
with fs.open(f"{bucket_name}/combined_df_store_{store}.pkl", "rb") as f:
df = cudf.DataFrame(pd.read_pickle(f))
df_test = df[(df["day_id"] >= holdout[0]) & (df["day_id"] < holdout[1])]
X_test = df_test.drop(columns=["item_id", "dept_id", "cat_id", "day_id", "sales"])
df_test["pred1"] = model[store].predict(X_test)
# Prediction from Model 2
df_test["pred2"] = [np.nan] * len(df_test)
df_test["pred2"] = df_test["pred2"].astype("float32")
for dept in DEPTS:
with fs.open(
f"{bucket_name}/combined_df_store_{store}_dept_{dept}.pkl", "rb"
) as f:
df2 = cudf.DataFrame(pd.read_pickle(f))
df2_test = df2[(df2["day_id"] >= holdout[0]) & (df2["day_id"] < holdout[1])]
X_test = df2_test.drop(columns=["item_id", "cat_id", "day_id", "sales"])
assert np.sum(df_test["dept_id"] == dept) == len(X_test)
df_test["pred2"][df_test["dept_id"] == dept] = model_alt[(store, dept)].predict(
X_test
)
# Average prediction
df_test["avg_pred"] = (df_test["pred1"] + df_test["pred2"]) / 2.0
test_wrmsse += wrmsse(
product_weights,
df,
df_test["avg_pred"],
train_mask=[0, 1914],
valid_mask=holdout,
)
print(f"WRMSSE metric on the held-out test set: {test_wrmsse}")
# Close the Dask cluster to clean up
cluster.close()
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/miniforge-cuda/renovate.json
|
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:base"
],
"packageRules": [
{
"matchDatasources": ["docker"],
"matchPackageNames": ["condaforge/miniforge3"],
"versioning": "loose"
}
]
}
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/miniforge-cuda/README.md
|
# miniforge-cuda
A simple set of images that install [Miniforge](https://github.com/conda-forge/miniforge) on top of the [nvidia/cuda](https://hub.docker.com/r/nvidia/cuda) images.
These images are intended to be used as a base image for other RAPIDS images. Downstream images can create a user in the `conda` user group, which has write access to the base conda environment in the image.
## `latest` tag
The `latest` tag is an alias for the Docker image that has the latest CUDA version, Python version, and Ubuntu version supported by this repository at any given time.
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/miniforge-cuda/matrix.yaml
|
CUDA_VER:
- "11.2.2"
- "11.4.3"
- "11.5.2"
- "11.8.0"
- "12.0.1"
- "12.1.1"
PYTHON_VER:
- "3.9"
- "3.10"
LINUX_VER:
- "ubuntu20.04"
- "ubuntu22.04"
- "centos7"
- "rockylinux8"
IMAGE_REPO:
- "miniforge-cuda"
exclude:
- LINUX_VER: "ubuntu22.04"
CUDA_VER: "11.2.2"
- LINUX_VER: "ubuntu22.04"
CUDA_VER: "11.4.3"
- LINUX_VER: "ubuntu22.04"
CUDA_VER: "11.5.2"
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/miniforge-cuda/Dockerfile
|
ARG CUDA_VER=11.8.0
ARG LINUX_VER=ubuntu22.04
FROM nvidia/cuda:${CUDA_VER}-base-${LINUX_VER}
ARG LINUX_VER
ARG PYTHON_VER=3.10
ARG DEBIAN_FRONTEND=noninteractive
ENV PATH=/opt/conda/bin:$PATH
ENV PYTHON_VERSION=${PYTHON_VER}
# Create a conda group and assign it as root's primary group
RUN groupadd conda; \
usermod -g conda root
# Ownership & permissions based on https://docs.anaconda.com/anaconda/install/multi-user/#multi-user-anaconda-installation-on-linux
COPY --from=condaforge/miniforge3:23.3.1-1 --chown=root:conda --chmod=770 /opt/conda /opt/conda
# Ensure new files are created with group write access & setgid. See https://unix.stackexchange.com/a/12845
RUN chmod g+ws /opt/conda
RUN \
# Ensure new files/dirs have group write/setgid permissions
umask g+ws; \
# install expected Python version
mamba install -y -n base python="${PYTHON_VERSION}"; \
mamba update --all -y -n base; \
find /opt/conda -follow -type f -name '*.a' -delete; \
find /opt/conda -follow -type f -name '*.pyc' -delete; \
conda clean -afy;
# Reassign root's primary group to root
RUN usermod -g root root
RUN \
# ensure conda environment is always activated
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh; \
echo ". /opt/conda/etc/profile.d/conda.sh; conda activate base" >> /etc/skel/.bashrc; \
echo ". /opt/conda/etc/profile.d/conda.sh; conda activate base" >> ~/.bashrc;
RUN case "${LINUX_VER}" in \
"ubuntu"*) \
apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y --no-install-recommends \
# needed by the ORC library used by pyarrow, because it provides /etc/localtime
tzdata \
# needed by dask/ucx
# TODO: remove these packages once they're available on conda
libnuma1 libnuma-dev \
&& rm -rf "/var/lib/apt/lists/*"; \
;; \
"centos"* | "rockylinux"*) \
yum -y update \
&& yum -y install --setopt=install_weak_deps=False \
# needed by dask/ucx
# TODO: remove these packages once they're available on conda
numactl-devel numactl-libs \
&& yum clean all; \
;; \
*) \
echo "Unsupported LINUX_VER: ${LINUX_VER}" && exit 1; \
;; \
esac
| 0 |
rapidsai_public_repos/miniforge-cuda
|
rapidsai_public_repos/miniforge-cuda/ci/compute-matrix.sh
|
#!/bin/bash
set -euo pipefail
case "${BUILD_TYPE}" in
pull-request)
export PR_NUM="${GITHUB_REF_NAME##*/}"
;;
branch)
;;
*)
echo "Invalid build type: '${BUILD_TYPE}'"
exit 1
;;
esac
yq -o json matrix.yaml | jq -c 'include "ci/compute-matrix"; compute_matrix(.)'
| 0 |
rapidsai_public_repos/miniforge-cuda
|
rapidsai_public_repos/miniforge-cuda/ci/remove-temp-images.sh
|
#!/bin/bash
set -euo pipefail
logout() {
curl -X POST \
-H "Authorization: JWT $HUB_TOKEN" \
"https://hub.docker.com/v2/logout/"
}
trap logout EXIT
HUB_TOKEN=$(
curl -s -H "Content-Type: application/json" \
-X POST \
-d "{\"username\": \"${GPUCIBOT_DOCKERHUB_USER}\", \"password\": \"${GPUCIBOT_DOCKERHUB_TOKEN}\"}" \
https://hub.docker.com/v2/users/login/ | jq -r .token \
)
echo "::add-mask::${HUB_TOKEN}"
full_repo_name=${IMAGE_NAME%%:*}
tag=${IMAGE_NAME##*:}
for arch in $(echo "$ARCHES" | jq .[] -r); do
curl --fail-with-body -i -X DELETE \
-H "Accept: application/json" \
-H "Authorization: JWT $HUB_TOKEN" \
"https://hub.docker.com/v2/repositories/$full_repo_name/tags/$tag-$arch/"
done
| 0 |
rapidsai_public_repos/miniforge-cuda
|
rapidsai_public_repos/miniforge-cuda/ci/compute-matrix.jq
|
def compute_arch($x):
["amd64"] |
if
$x.CUDA_VER > "11.2.2" and
$x.LINUX_VER != "centos7"
then
. + ["arm64"]
else
.
end |
$x + {ARCHES: .};
# Checks the current entry to see if it matches the given exclude
def matches($entry; $exclude):
all($exclude | to_entries | .[]; $entry[.key] == .value);
def compute_repo($x):
if
env.BUILD_TYPE == "pull-request"
then
"staging"
else
$x.IMAGE_REPO
end;
def compute_tag_prefix($x):
if
env.BUILD_TYPE == "branch"
then
""
else
$x.IMAGE_REPO + "-" + env.PR_NUM + "-"
end;
def compute_image_name($x):
compute_repo($x) as $repo |
compute_tag_prefix($x) as $tag_prefix |
"rapidsai/" + $repo + ":" + $tag_prefix + "cuda" + $x.CUDA_VER + "-base-" + $x.LINUX_VER + "-" + "py" + $x.PYTHON_VER |
$x + {IMAGE_NAME: .};
# Checks the current entry to see if it matches any of the excludes.
# If so, produce no output. Otherwise, output the entry.
def filter_excludes($entry; $excludes):
select(any($excludes[]; matches($entry; .)) | not);
def lists2dict($keys; $values):
reduce range($keys | length) as $ind ({}; . + {($keys[$ind]): $values[$ind]});
def compute_matrix($input):
($input.exclude // []) as $excludes |
$input | del(.exclude) |
keys_unsorted as $matrix_keys |
to_entries |
map(.value) |
[
combinations |
lists2dict($matrix_keys; .) |
filter_excludes(.; $excludes) |
compute_arch(.) |
compute_image_name(.)
] |
{include: .};
| 0 |
rapidsai_public_repos/miniforge-cuda
|
rapidsai_public_repos/miniforge-cuda/ci/create-multiarch-manifest.sh
|
#!/bin/bash
set -euo pipefail
LATEST_CUDA_VER=$(yq '.CUDA_VER | sort | .[-1]' matrix.yaml)
LATEST_PYTHON_VER=$(yq -o json '.PYTHON_VER' matrix.yaml | jq -r 'max_by(split(".") | map(tonumber))')
LATEST_UBUNTU_VER=$(yq '.LINUX_VER | map(select(. == "*ubuntu*")) | sort | .[-1]' matrix.yaml)
source_tags=()
tag="${IMAGE_NAME}"
for arch in $(echo "${ARCHES}" | jq .[] -r); do
source_tags+=("${tag}-${arch}")
done
docker manifest create "${tag}" "${source_tags[@]}"
docker manifest push "${tag}"
if [[
"${LATEST_UBUNTU_VER}" == "${LINUX_VER}" &&
"${LATEST_CUDA_VER}" == "${CUDA_VER}" &&
"${LATEST_PYTHON_VER}" == "${PYTHON_VER}"
]]; then
# only create a 'latest' manifest if it is a non-PR workflow.
if [[ "${BUILD_TYPE}" != "pull-request" ]]; then
docker manifest create "rapidsai/${IMAGE_REPO}:latest" "${source_tags[@]}"
docker manifest push "rapidsai/${IMAGE_REPO}:latest"
else
echo "Skipping 'latest' manifest creation for PR workflow."
fi
fi
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/build-metrics-reporter/rapids-build-metrics-reporter.py
|
#
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
#
import argparse
import os
import sys
import xml.etree.ElementTree as ET
from pathlib import Path
from xml.dom import minidom
parser = argparse.ArgumentParser()
parser.add_argument(
"log_file", type=str, default=".ninja_log", help=".ninja_log file"
)
parser.add_argument(
"--fmt",
type=str,
default="terminal",
choices=["csv", "xml", "html", "terminal"],
help="output format (to stdout)",
)
parser.add_argument(
"--msg",
type=str,
default=None,
help="optional text file to include at the top of the html output",
)
parser.add_argument(
"--cmp_log",
type=str,
default=None,
help="optional baseline ninja_log to compare results",
)
args = parser.parse_args()
log_file = args.log_file
output_fmt = args.fmt
cmp_file = args.cmp_log
# build a map of the log entries
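# Each .ninja_log line is expected to follow the ninja log v5 layout of
# whitespace-separated fields: start_ms, end_ms, restat_mtime, output_path,
# command_hash (entry[4] below keys the dict; entry[0]/entry[1] are start/end).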
def build_log_map(log_file):
entries = {}
log_path = os.path.dirname(os.path.abspath(log_file))
with open(log_file) as log:
last = 0
files = {}
for line in log:
entry = line.split()
if len(entry) > 4:
obj_file = entry[3]
file_size = (
os.path.getsize(os.path.join(log_path, obj_file))
if os.path.exists(obj_file)
else 0
)
start = int(entry[0])
end = int(entry[1])
# logic based on ninjatracing
if end < last:
files = {}
last = end
files.setdefault(entry[4], (entry[3], start, end, file_size))
# build entries from files dict
for entry in files.values():
entries[entry[0]] = (entry[1], entry[2], entry[3])
return entries
# output results in XML format
def output_xml(entries, sorted_list, args):
root = ET.Element("testsuites")
testsuite = ET.Element(
"testsuite",
attrib={
"name": "build-time",
"tests": str(len(sorted_list)),
"failures": str(0),
"errors": str(0),
},
)
root.append(testsuite)
for name in sorted_list:
entry = entries[name]
build_time = float(entry[1] - entry[0]) / 1000
item = ET.Element(
"testcase",
attrib={
"classname": "BuildTime",
"name": name,
"time": str(build_time),
},
)
testsuite.append(item)
tree = ET.ElementTree(root)
xmlstr = minidom.parseString(ET.tostring(root)).toprettyxml(indent=" ")
print(xmlstr)
# utility converts a millisecond value to a column width in pixels
def time_to_width(value, end):
# map a value from (0,end) to (0,1000)
r = (float(value) / float(end)) * 1000.0
return int(r)
# assign each entry to a thread by analyzing the start/end times and
# slotting them into thread buckets where they fit
def assign_entries_to_threads(entries):
# first sort the entries' keys by end timestamp
sorted_keys = sorted(
list(entries.keys()), key=lambda k: entries[k][1], reverse=True
)
# build the chart data by assigning entries to threads
results = {}
threads = []
for name in sorted_keys:
entry = entries[name]
# assign this entry by finding the first available thread identified
# by the thread's current start time greater than the entry's end time
tid = -1
for t in range(len(threads)):
if threads[t] >= entry[1]:
threads[t] = entry[0]
tid = t
break
# if no current thread found, create a new one with this entry
if tid < 0:
threads.append(entry[0])
tid = len(threads) - 1
# add entry name to the array associated with this tid
if tid not in results.keys():
results[tid] = []
results[tid].append(name)
# first entry has the last end time
end_time = entries[sorted_keys[0]][1]
# return the threaded entries and the last end time
return (results, end_time)
# format the build-time
def format_build_time(input_time):
build_time = abs(input_time)
build_time_str = str(build_time) + " ms"
if build_time > 120000: # 2 minutes
minutes = int(build_time / 60000)
seconds = int(((build_time / 60000) - minutes) * 60)
build_time_str = "{:d}:{:02d} min".format(minutes, seconds)
elif build_time > 1000:
build_time_str = "{:.3f} s".format(build_time / 1000)
if input_time < 0:
build_time_str = "-" + build_time_str
return build_time_str
# format file size
def format_file_size(input_size):
file_size = abs(input_size)
file_size_str = ""
if file_size > 1000000:
file_size_str = "{:.3f} MB".format(file_size / 1000000)
elif file_size > 1000:
file_size_str = "{:.3f} KB".format(file_size / 1000)
elif file_size > 0:
file_size_str = str(file_size) + " bytes"
if input_size < 0:
file_size_str = "-" + file_size_str
return file_size_str
# Output chart results in HTML format
# Builds a standalone html file with no javascript or styles
def output_html(entries, sorted_list, cmp_entries, args):
print("<html><head><title>Build Metrics Report</title>")
print("</head><body>")
if args.msg is not None:
msg_file = Path(args.msg)
if msg_file.is_file():
msg = msg_file.read_text()
print("<p>", msg, "</p>")
# map entries to threads
# the end_time is used to scale all the entries to a fixed output width
threads, end_time = assign_entries_to_threads(entries)
# color ranges for build times
summary = {"red": 0, "yellow": 0, "green": 0, "white": 0}
red = "bgcolor='#FFBBD0'"
yellow = "bgcolor='#FFFF80'"
green = "bgcolor='#AAFFBD'"
white = "bgcolor='#FFFFFF'"
# create the build-time chart
print("<table id='chart' width='1000px' bgcolor='#BBBBBB'>")
for tid in range(len(threads)):
names = threads[tid]
# sort the names for this thread by start time
names = sorted(names, key=lambda k: entries[k][0])
# use the last entry's end time as the total row size
# (this is an estimate and does not have to be exact)
last_entry = entries[names[len(names) - 1]]
last_time = time_to_width(last_entry[1], end_time)
print(
"<tr><td><table width='",
last_time,
"px' border='0' cellspacing='1' cellpadding='0'><tr>",
sep="",
)
prev_end = 0 # used for spacing between entries
# write out each entry for this thread as a column for a single row
for name in names:
entry = entries[name]
start = entry[0]
end = entry[1]
# this handles minor gaps between end of the
# previous entry and the start of the next
if prev_end > 0 and start > prev_end:
size = time_to_width(start - prev_end, end_time)
print("<td width='", size, "px'></td>")
# adjust for the cellspacing
prev_end = end + int(end_time / 500)
build_time = end - start
build_time_str = format_build_time(build_time)
# assign color and accumulate legend values
color = white
if build_time > 300000: # 5 minutes
color = red
summary["red"] += 1
elif build_time > 120000: # 2 minutes
color = yellow
summary["yellow"] += 1
elif build_time > 1000: # 1 second
color = green
summary["green"] += 1
else:
summary["white"] += 1
# compute the pixel width based on build-time
size = max(time_to_width(build_time, end_time), 2)
# output the column for this entry
print("<td height='20px' width='", size, "px' ", sep="", end="")
# title text is shown as hover-text by most browsers
print(color, "title='", end="")
print(name, "\n", build_time_str, "' ", sep="", end="")
# centers the name if it fits in the box
print("align='center' nowrap>", end="")
# use a slightly smaller, fixed-width font
print("<font size='-2' face='courier'>", end="")
# add the file-name if it fits, otherwise, truncate the name
file_name = os.path.basename(name)
if len(file_name) + 3 > size / 7:
abbr_size = int(size / 7) - 3
if abbr_size > 1:
print(file_name[:abbr_size], "...", sep="", end="")
else:
print(file_name, end="")
# done with this entry
print("</font></td>")
# update the entry with just the computed output info
entries[name] = (build_time, color, entry[2])
# add a filler column at the end of each row
print("<td width='*'></td></tr></table></td></tr>")
# done with the chart
print("</table><br/>")
# output detail table in build-time descending order
print("<table id='detail' bgcolor='#EEEEEE'>")
print(
"<tr><th>File</th>", "<th>Compile time</th>", "<th>Size</th>", sep=""
)
if cmp_entries:
print("<th>t-cmp</th>", sep="")
print("</tr>")
for name in sorted_list:
entry = entries[name]
build_time = entry[0]
color = entry[1]
file_size = entry[2]
build_time_str = format_build_time(build_time)
file_size_str = format_file_size(file_size)
# output entry row
print("<tr ", color, "><td>", name, "</td>", sep="", end="")
print("<td align='right'>", build_time_str, "</td>", sep="", end="")
print("<td align='right'>", file_size_str, "</td>", sep="", end="")
# output diff column
cmp_entry = (
cmp_entries[name] if cmp_entries and name in cmp_entries else None
)
if cmp_entry:
diff_time = build_time - (cmp_entry[1] - cmp_entry[0])
diff_time_str = format_build_time(diff_time)
diff_color = white
diff_percent = int((diff_time / build_time) * 100)
if build_time > 60000:
if diff_percent > 20:
diff_color = red
diff_time_str = "<b>" + diff_time_str + "</b>"
elif diff_percent < -20:
diff_color = green
diff_time_str = "<b>" + diff_time_str + "</b>"
elif diff_percent > 0:
diff_color = yellow
print(
"<td align='right' ",
diff_color,
">",
diff_time_str,
"</td>",
sep="",
end="",
)
print("</tr>")
print("</table><br/>")
# include summary table with color legend
print("<table id='legend' border='2' bgcolor='#EEEEEE'>")
print("<tr><td", red, ">time > 5 minutes</td>")
print("<td align='right'>", summary["red"], "</td></tr>")
print("<tr><td", yellow, ">2 minutes < time < 5 minutes</td>")
print("<td align='right'>", summary["yellow"], "</td></tr>")
print("<tr><td", green, ">1 second < time < 2 minutes</td>")
print("<td align='right'>", summary["green"], "</td></tr>")
print("<tr><td", white, ">time < 1 second</td>")
print("<td align='right'>", summary["white"], "</td></tr>")
print("</table>")
if cmp_entries:
print("<table id='legend' border='2' bgcolor='#EEEEEE'>")
print("<tr><td", red, ">time increase > 20%</td></tr>")
print("<tr><td", yellow, ">time increase > 0</td></tr>")
print("<tr><td", green, ">time decrease > 20%</td></tr>")
print(
"<tr><td",
white,
">time change < 20%% or build time < 1 minute</td></tr>",
)
print("</table>")
print("</body></html>")
# output results in CSV format
def output_csv(entries, sorted_list, cmp_entries, args):
print("time,size,file", end="")
if cmp_entries:
print(",diff", end="")
print()
for name in sorted_list:
entry = entries[name]
build_time = entry[1] - entry[0]
file_size = entry[2]
cmp_entry = (
cmp_entries[name] if cmp_entries and name in cmp_entries else None
)
print(build_time, file_size, name, sep=",", end="")
if cmp_entry:
diff_time = build_time - (cmp_entry[1] - cmp_entry[0])
print(",", diff_time, sep="", end="")
print()
def output_terminal(entries, sorted_list, cmp_entries, args):
for name in sorted_list:
entry = entries[name]
build_time_sec = (entry[1] - entry[0]) / 1000.0
file_size = entry[2]
if file_size < 2**20:
# Less than 1MB
file_size_str = f"{file_size // 1024 : 4d}K"
else:
file_size_str = f"{file_size // (1024 * 1024) : 4d}M"
if cmp_entries is None:
print(f"{build_time_sec:6.1f}s {file_size_str} {name}")
else:
cmp_entry = cmp_entries.get(name, None)
if cmp_entry is None:
diff_str = " "
else:
cmp_build_time_sec = (cmp_entry[1] - cmp_entry[0]) / 1000.0
time_diff_abs = build_time_sec - cmp_build_time_sec
time_diff_rel = 100.0 * (time_diff_abs / cmp_build_time_sec)
diff_str = f"{time_diff_abs:+6.1f}s {time_diff_rel:+5.1f}%"
print(f"{build_time_sec:6.1f}s {diff_str} {file_size_str} {name}")
# parse log file into map
entries = build_log_map(log_file)
if len(entries) == 0:
print("Could not parse", log_file)
exit()
# sort the entries by build-time (descending order)
sorted_list = sorted(
list(entries.keys()),
key=lambda k: entries[k][1] - entries[k][0],
reverse=True,
)
# load the comparison build log if available
cmp_entries = build_log_map(cmp_file) if cmp_file else None
if output_fmt == "xml":
output_xml(entries, sorted_list, args)
elif output_fmt == "html":
output_html(entries, sorted_list, cmp_entries, args)
elif output_fmt == "csv":
output_csv(entries, sorted_list, cmp_entries, args)
else:
output_terminal(entries, sorted_list, cmp_entries, args)
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/build-metrics-reporter/rapids-template-instantiation-reporter.py
|
#!/usr/bin/env python3
import argparse
import subprocess
from subprocess import PIPE
import shutil
from collections import Counter
from pathlib import Path
def log(msg, verbose=True):
if verbose:
print(msg)
def run(*args, **kwargs):
return subprocess.run(list(args), check=True, **kwargs)
def progress(iterable, display=True):
if display:
print()
for i, it in enumerate(iterable):
if display:
print(f"\rProgress: {i}", end="", flush=True)
yield it
if display:
print()
def extract_template(line):
# Example line:
# Function void raft::random::detail::rmat_gen_kernel<long, double>(T1 *, T1 *, T1 *, const T2 *, T1, T1, T1, T1, raft::random::RngState):
line = line.replace("Function", "").replace("void", "").strip()
if "<" in line:
line = line.split("<")[0]
# Example return: raft::random::detail::rmat_gen_kernel
return line
def get_kernels(cuobjdump, cu_filt, grep, object_file_path):
try:
# Executes:
# > cuobjdump -res-usage file | cu++filt | grep Function
step1 = run(cuobjdump, "-res-usage", object_file_path, stdout=PIPE)
step2 = run(cu_filt, input=step1.stdout, stdout=PIPE)
step3 = run(grep, "Function", input=step2.stdout, stdout=PIPE)
out_str = step3.stdout.decode(encoding="utf-8", errors="strict")
return [extract_template(line) for line in out_str.splitlines()]
except Exception as e:
print(e)
return []
def get_object_files(ninja, build_dir, target):
# Executes:
# > ninja -C build/dir -t input <target>
build_dir = Path(build_dir)
out = run(ninja, "-C", build_dir, "-t", "inputs", target, stdout=PIPE)
out_str = out.stdout.decode(encoding="utf-8", errors="strict")
target_path = build_dir / target
# If the target exists and is an object file, add it to the list of
# candidates.
if target_path.exists() and str(target_path).endswith(".o"):
additional_objects = [target_path]
else:
additional_objects = []
return [
str(build_dir / line.strip())
for line in out_str.splitlines()
if line.endswith(".o")
] + additional_objects
def main(
build_dir,
target,
top_n,
skip_details=False,
skip_kernels=False,
skip_objects=False,
display_progress=True,
verbose=True,
):
# Check that we have the right binaries in the environment.
binary_names = ["ninja", "grep", "cuobjdump", "cu++filt"]
binaries = list(map(shutil.which, binary_names))
ninja, grep, cuobjdump, cu_filt = binaries
fail_on_bins = any(b is None for b in binaries)
if fail_on_bins:
for path, name in zip(binaries, binary_names):
if path is None:
print(f"Could not find {name}. Make sure {name} is in PATH.")
exit(1)
for path, name in zip(binaries, binary_names):
log(f"Found {name}: {path}", verbose=verbose)
# Get object files from target:
object_files = get_object_files(ninja, build_dir, target)
# Compute the counts of each object-kernel combination
get_kernel_bins = (cuobjdump, cu_filt, grep)
obj_kernel_tuples = (
(obj, kernel)
for obj in object_files
for kernel in get_kernels(*get_kernel_bins, obj)
)
obj_kernel_counts = Counter(
tup for tup in progress(obj_kernel_tuples, display=display_progress)
)
# Create an index with the kernel counts per object and the object count per kernel:
obj2kernel = dict()
kernel2obj = dict()
kernel_counts = Counter()
obj_counts = Counter()
for (obj, kernel), count in obj_kernel_counts.items():
# Update the obj2kernel and kernel2obj index:
obj2kernel_ctr = obj2kernel.setdefault(obj, Counter())
kernel2obj_ctr = kernel2obj.setdefault(kernel, Counter())
obj2kernel_ctr += Counter({kernel: count})
kernel2obj_ctr += Counter({obj: count})
# Update counters:
kernel_counts += Counter({kernel: count})
obj_counts += Counter({obj: count})
# Print summary statistics
if not skip_objects:
print("\nObjects with most kernels")
print("=========================\n")
for obj, total_count in obj_counts.most_common()[:top_n]:
print(
f"{total_count:4d} kernel instances in {obj} ({len(obj2kernel[obj])} kernel templates)"
)
if not skip_kernels:
print("\nKernels with most instances")
print("===========================\n")
for kernel, total_count in kernel_counts.most_common()[:top_n]:
print(
f"{total_count:4d} instances of {kernel} in {len(kernel2obj[kernel])} objects."
)
if skip_details:
return
if not skip_objects:
print("\nDetails: Objects")
print("================\n")
for obj, total_count in obj_counts.most_common()[:top_n]:
print(
f"{total_count:4d} kernel instances in {obj} across {len(obj2kernel[obj])} templates:"
)
for kernel, c in obj2kernel[obj].most_common():
print(f" {c:4d}: {kernel}")
print()
if not skip_kernels:
print("\nDetails: Kernels")
print("================\n")
for kernel, total_count in kernel_counts.most_common()[:top_n]:
print(
f"{total_count:4d} instances of {kernel} in {len(kernel2obj[kernel])} objects:"
)
for obj, c in kernel2obj[kernel].most_common():
print(f" {c:4d}: {obj}")
print()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("target", type=str, help="The ninja target to investigate")
parser.add_argument(
"--build-dir",
type=str,
default="./",
help="Build directory",
)
parser.add_argument(
"--top-n",
type=int,
default=None,
help="Log only top N most common objects/kernels",
)
parser.add_argument(
"--skip-details",
action="store_true",
help="Show a summary of statistics, but no details.",
)
parser.set_defaults(skip_details=False)
parser.add_argument(
"--no-progress", action="store_true", help="Do not show progress indication"
)
parser.set_defaults(no_progress=False)
parser.add_argument(
"--skip-objects", action="store_true", help="Do not show statistics on objects"
)
parser.set_defaults(skip_objects=False)
parser.add_argument(
"--skip-kernels", action="store_true", help="Do not show statistics on kernels"
)
parser.set_defaults(skip_kernels=False)
parser.add_argument("--verbose", action="store_true")
parser.set_defaults(verbose=False)
args = parser.parse_args()
main(
args.build_dir,
args.target,
args.top_n,
skip_details=args.skip_details,
skip_kernels=args.skip_kernels,
skip_objects=args.skip_objects,
display_progress=not args.no_progress,
verbose=args.verbose,
)
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/build-metrics-reporter/README.md
|
# build-metrics-reporter
## Summary
This repository contains the source code for `rapids-build-metrics-reporter.py`, a small Python script that generates a report of compile times and cache hit rates for RAPIDS library builds.
It is intended to be used in the `build.sh` script of any given RAPIDS repository like this:
```sh
if ! rapids-build-metrics-reporter.py 2> /dev/null && [ ! -f rapids-build-metrics-reporter.py ]; then
echo "Downloading rapids-build-metrics-reporter.py"
curl -sO https://raw.githubusercontent.com/rapidsai/build-metrics-reporter/v1/rapids-build-metrics-reporter.py
fi
PATH=".:$PATH" rapids-build-metrics-reporter.py
```
The logic in the excerpt above ensures that `rapids-build-metrics-reporter.py` can be used in CI (where it will be pre-installed in RAPIDS CI images) and local environments (where it will be downloaded to the local filesystem).
## Versioning
To avoid the overhead of a PyPI package for such a trivial script, this repository uses versioned branches to track breaking changes.
Any breaking changes should be made to new versioned branches (e.g. `v2`, `v3`, etc.).
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cloud-ml-examples/README.md
|
# <div align="left"><img src="img/rapids_logo.png" width="90px"/> RAPIDS Cloud Machine Learning Services Integration</div>
RAPIDS is a suite of open-source libraries that bring GPU acceleration
to data science pipelines. Users building cloud-based machine learning experiments can take advantage of this acceleration
throughout their workloads to build models faster, cheaper, and more
easily on the cloud platform of their choice.
This repository provides example notebooks and "getting started" code
samples to help you integrate RAPIDS with the hyperparameter
optimization services from Azure ML, AWS Sagemaker, Google
Cloud, and Databricks. The directory for each cloud contains a step-by-step guide to
launch an example hyperparameter optimization job. Each example job will use RAPIDS
[cuDF](https://github.com/rapidsai/cudf) to load and preprocess
data and use [cuML](https://github.com/rapidsai/cuml) or [XGBoost](https://github.com/dmlc/xgboost) for GPU-accelerated model training. RAPIDS also integrates easily with MLflow to track and orchestrate experiments from any of these frameworks.
For large datasets, you can find example notebooks using [Dask](https://github.com/dask/dask) to load data and train models on multiple GPUs in the same instance or in a multi-node multi-GPU cluster.
#### Notebooks with a ✅ are fully functional as of RAPIDS Release 22.08; notebooks with a ❌ require an update or replacement.
| Cloud / Framework | HPO Example | Multi-node multi-GPU Example|
| - | - | - |
| **Microsoft Azure** | [Azure ML HPO ❌](https://github.com/rapidsai/cloud-ml-examples/blob/main/azure/README.md "Azure Deployment Guide") | [Multi-node multi-GPU cuML on Azure ❌](https://github.com/rapidsai/cloud-ml-examples/tree/main/azure#2-rapids-mnmg-example-using-dask-cloud-provider "Azure MNMG notebook") |
| **Amazon Web Services (AWS)** | [AWS SageMaker HPO ✅](https://github.com/rapidsai/cloud-ml-examples/blob/main/aws/README.md "SageMaker Deployment Guide") <br /> [Scaling up hyperparameter optimization with Kubernetes and XGBoost GPU algorithm ✅](https://github.com/rapidsai/cloud-ml-examples/blob/main/k8s-dask/notebooks/xgboost-gpu-hpo-job-parallel-k8s.ipynb) |
| **Google Cloud Platform (GCP)** | [Google AI Platform HPO ❌](https://github.com/rapidsai/cloud-ml-examples/blob/main/gcp/README.md "GCP Deployment Guide") <br /> [Scaling up hyperparameter optimization with Kubernetes and XGBoost GPU algorithm ✅](https://github.com/rapidsai/cloud-ml-examples/blob/main/k8s-dask/notebooks/xgboost-gpu-hpo-job-parallel-k8s.ipynb) | [Multi-node multi-GPU XGBoost and cuML on Google Kubernetes Engine (GKE) ✅](./dask/kubernetes/Dask_cuML_Exploration_Full.ipynb)
| **Dask** | [Dask-ML HPO ✅](https://github.com/rapidsai/cloud-ml-examples/tree/main/dask "Dask-ML Deployment Guide") | [Multi-node multi-GPU XGBoost and cuML ✅](./dask/kubernetes/Dask_cuML_Exploration.ipynb) |
| **Databricks** | [Hyperopt and MLflow on Databricks ✅](https://github.com/rapidsai/cloud-ml-examples/blob/main/databricks/README.md "Databricks Cloud Deployment Guide") |
| **MLflow** | [Hyperopt and MLflow on GKE ✅](https://github.com/rapidsai/cloud-ml-examples/blob/main/mlflow/docker_environment/README.md "Kubernetes MLflow Deployment with RAPIDS") |
| **Optuna** | [Dask-Optuna HPO ✅](https://github.com/rapidsai/cloud-ml-examples/blob/main/optuna/notebooks/optuna_rapids.ipynb "Dask-Optuna notebook") <br /> [Optuna on Azure ML ❌](https://github.com/rapidsai/cloud-ml-examples/blob/main/optuna/notebooks/azure-optuna/run_optuna.ipynb "Optuna on Azure notebook")|
| **Ray Tune** | [Ray Tune HPO ❌](https://github.com/rapidsai/cloud-ml-examples/tree/main/ray "RayTune Deployment Guide") |
---
## Quick Start Using RAPIDS Cloud ML Container
The [Cloud ML Docker Repository](https://hub.docker.com/r/rapidsai/rapidsai-cloud-ml) provides a ready-to-run Docker container with RAPIDS and the libraries/SDKs needed for the AWS SageMaker, Azure ML, and Google AI Platform HPO examples.
### Pull Docker Image:
```shell script
docker pull rapidsai/rapidsai-cloud-ml:22.10-cuda11.5-base-ubuntu20.04-py3.9
```
### Build Docker Image:
From the root cloud-ml-examples directory:
```shell script
docker build --tag rapidsai-cloud-ml:latest --file ./common/docker/Dockerfile.training.unified ./
```
## Bring Your Own Cloud (Dask and Ray)
In addition to public cloud HPO options, the repository also includes
"BYOC" sample notebooks that can be run on the public cloud or private
infrastructure of your choice; these leverage [Ray Tune](https://docs.ray.io/en/master/tune/index.html) or [Dask-ML](https://ml.dask.org/) for distributed infrastructure.
Check out the [RAPIDS HPO](https://rapids.ai/hpo.html) webpage for video tutorials and blog posts.

| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cloud-ml-examples/LICENSE
|
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 NVIDIA CORPORATION
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow
|
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/Dockerfile.training
|
FROM rapidsai/rapidsai:cuda11.0-runtime-ubuntu18.04-py3.8
# Install the training-time Python dependencies (cloud storage clients,
# hyperopt, and MLflow itself) into the "rapids" conda environment.
RUN source activate rapids \
    && mkdir /opt/mlflow \
    && pip install \
        boto3 \
        google-cloud \
        google-cloud-storage \
        gcsfs \
        hyperopt \
        mlflow \
        psycopg2-binary
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow
|
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/DetailedConfig.md
|
# [Detailed Google Kubernetes Engine (GKE) Guide](#anchor-start)
### Baseline
For all steps referring to the Google Cloud Platform (GCP) console window, components can be selected from the hamburger button
on the top left of the console.

## [Create a GKE Cluster](#anchor-create-cluster)
__Verify Adequate GPU Quotas__
Depending on your account limitations, you may have restricted numbers and types of GPUs that can be allocated within a given zone or region.
- Navigate to your [GCP Console](https://console.cloud.google.com/)
- Select "IAM & Admin" $\rightarrow$ "Quotas"

- Filter by the type of GPU you want to add, e.g. 'T4', 'V100', etc.
- Select __ALL QUOTAS__ under the __Details__ column
- If you find that you’re not given the option to assign GPUs during cluster configuration, re-check that you have an allocated quota; a command-line check is sketched below.
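For example (the `us-east1` region here is only a placeholder; substitute the region you plan to use):
```shell script
# List the GPU-related quota entries (metric, limit, usage) for a region
gcloud compute regions describe us-east1 --format="yaml(quotas)" | grep -i -B1 -A1 gpu
```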
## [Configure the Cluster hardware](#anchor-configure-cluster)
Once you’ve verified that you can allocate the necessary hardware for your cluster, you’ll need to go through the process
of creating a new cluster and configuring the node pools that will host your services and run MLflow jobs.
__Allocate the Appropriate Hardware__
- Navigate to your [GCP Console](https://console.cloud.google.com/)
- Select __Kubernetes Engine__ $\rightarrow$ __Clusters__
- Create a new cluster; for our purposes we'll assume the following values:

- Create two sets of node-pools: __cpu-pool__ and __gpu-pool__
- This is a good practice that can help reduce overall costs, so that you are not running CPU-only
tasks on GPU-enabled nodes. GKE will automatically taint GPU nodes so that they will be unavailable for
tasks that do not request a GPU.
- __CPU Pool__


- __GPU Pool__


- Click __Create__ and wait for your cluster to come up. (A `gcloud` equivalent of these console steps is sketched below.)
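A rough command-line sketch, assuming the cluster name and zone used later in this guide (`rapids-mlflow-test` in `us-east1-c`); the default pool plays the role of __cpu-pool__ here, and machine types, GPU type, and node counts should be adjusted to match your quota:
```shell script
# Create the cluster with a CPU-only default node pool
gcloud container clusters create rapids-mlflow-test \
    --zone us-east1-c \
    --num-nodes 2 \
    --machine-type n1-standard-4

# Add a GPU node pool; GKE taints these nodes automatically
gcloud container node-pools create gpu-pool \
    --cluster rapids-mlflow-test \
    --zone us-east1-c \
    --num-nodes 1 \
    --machine-type n1-standard-8 \
    --accelerator type=nvidia-tesla-t4,count=1
```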
## [Configure Kubectl](#anchor-kubectl)
__Obtain Kubectl Cluster Credentials from GKE.__
- First, be sure that [Kubectl is installed](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- Once your cluster appears to be up, running, and reported green by GKE, we need to use __gcloud__ to configure kubectl
with the correct credentials.
```shell script
gcloud container clusters get-credentials rapids-mlflow-test --zone us-east1-c
```
- Once this command completes, your kubectl's default configuration should be pointing to your GKE cluster instance
and able to interact with it.
```shell script
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* gke_[YOUR_CLUSTER] gke_[YOUR_CLUSTER] gke_[YOUR_CLUSTER] default
```
```shell script
kubectl get all
```
__Create an Up to Date NVIDIA Driver Installer.__
As of this writing, this step is necessary to ensure that a CUDA 11 compatible driver (450+) is installed on your worker nodes.
```shell script
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-nvidia-v450.yaml
```
## [Create a Storage Bucket and Make it Accessible from GKE](#anchor-create-storage-bucket)
You need to create a storage bucket that can be read from and written to from your GKE cluster. This will host the training data,
as well as provide an endpoint for MLflow to use as artifact storage. For this, you’ll need two items: the bucket itself,
and a service account that you can use to authenticate from GKE.
[__Create a Service Account__](#anchor-create-service-account)
- Navigate to your [GCP Console](https://console.cloud.google.com/)
- Select "IAM & Admin" $\rightarrow$ "Service Acccounts"

- Create a new account; for our purposes set:
- Service Account Name: __test-svc-acct__
- Service Account ID: __test-svc-acct@[project].iam.gserviceaccount.com__
- Skip optional steps and click 'Done'.
- Find your service account in the account list, click on the navigation dots in the far-right cell, titled __Actions__,
and select __Create Key__.
- You will be prompted to create and download a private key.
Leave the key type as 'JSON' and click create.
- Save the file you are prompted with as __keyfile.json__
- You will inject this, as a secret, into your GKE cluster.
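The same service account and key can also be created from the command line; a hedged sketch (`${YOUR_PROJECT}` is your GCP project ID):
```shell script
# Create the service account
gcloud iam service-accounts create test-svc-acct --display-name "test-svc-acct"

# Create and download a JSON key for it
gcloud iam service-accounts keys create keyfile.json \
    --iam-account test-svc-acct@${YOUR_PROJECT}.iam.gserviceaccount.com
```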
## [Create a Storage Bucket and Attach Your Service Account](#anchor-config-storage-bucket)
- Navigate to your [GCP Console](https://console.cloud.google.com/)
- Select __Storage__ $\rightarrow$ __Browser__

- Create a new bucket, with a unique name
- I'll refer to this as __\${YOUR_BUCKET}__
- Leave the default settings and click __Create__
- Now find your bucket in the storage browser list, and click on it.
- Click on __Permissions__ $\rightarrow$ __Add__


- Add the service account you created in the previous step, and give it the role of __Storage Object Admin__ so that it will be
able to read and write to your bucket.

- Click __Save__
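If you prefer to script this step, a minimal `gsutil` sketch that creates the bucket and grants the service account the same role (using the placeholder names above):
```shell script
# Create the bucket and grant the service account object admin on it
gsutil mb gs://${YOUR_BUCKET}
gsutil iam ch \
    serviceAccount:test-svc-acct@${YOUR_PROJECT}.iam.gserviceaccount.com:roles/storage.objectAdmin \
    gs://${YOUR_BUCKET}
```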
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow
|
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/k8s_config.json
|
{
"kube-context": "",
"kube-job-template-path": "k8s_job_template.yaml",
"repository-uri": "${GCR_REPO}/rapids-mlflow-training"
}
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow
|
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/README.md
|
# End to End - RAPIDS, hyperopt, and MLflow, on Google Kubernetes Engine (GKE).
## Overview
This example will go through the process of setting up all the components to run your own RAPIDS-based hyper-parameter
training, with a custom MLflow backend service, artifact storage, and Tracking Server, using Google Cloud Platform (GCP)
and Google Kubernetes Engine (GKE).
By the end of this guide, you will have:
* Deployed a Postgresql database which will serve as your MLflow backing service.
* Deployed an MLflow Tracking Server, which will act as our REST interface to coordinate logging of training parameters,
and metrics.
* Configured a GCP storage bucket endpoint that will act as an MLflow artifact store.
* Trained a RAPIDS machine learning model on your Kubernetes cluster using the MLflow CLI, and saved it using MLflow's
model registry tools.
* Run an example MLflow serving task, using your registered model, which is able to predict whether or not a flight will
be late.
## Pre-requisites
__For the purposes of this example we'll be using a subset of the 'airline' dataset. From this, we will build a
simple random forest classifier that predicts whether or not a flight will be LATE or ON-TIME.__
***
- **Basic Environment Requirements**
- You will need to have a GKE cluster running, with at least 1 CPU node, and at least 1 GPU node (P100/V100/T4). For
detailed instructions on this process, see this [Guide](./DetailedConfig.md).
- **Note:** For the purposes of this demo, we will assume our Kubernetes nodes are provisioned as follows:
- Version: __1.17.9-gke.6300__
- Image: __Container-Optimized OS(cos)__
- Node Pools:
- cpu-pool: 2 nodes (n1-standard-4)
- gpu-pool: 1 node (n1-standard-8 with T4 GPU)

- **Note:** There will be some configuration parameters specific to your GKE cluster; we will refer to these as
follows:
- YOUR_PROJECT: The name of the GCP project where your GKE cluster lives.
- YOUR_REPO: The name of the repository where your GCR images will live.
- YOUR_BUCKET: The name of the GCP storage bucket where your data will be stored.
- MLFLOW_TRACKING_URI: The URI of the tracking server we will deploy to your GKE cluster.
- For more information, check out [GKE Overview](https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview).
- You will need to have your `kubectl` utility configured with the credentials for your GKE cluster instance.
- For more information, check out [GKE cluster's context](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#generate_kubeconfig_entry).
- To check that you're using the right context:
```shell script
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* gke_[YOUR_CLUSTER] gke_[YOUR_CLUSTER] gke_[YOUR_CLUSTER]
```
- You will need to upload files to a GCP bucket that your cluster project has access to.
- For more information, check out [GCP's documentation](https://cloud.google.com/storage/docs/uploading-objects).
- Configure Authentication:
- You will need to create a [GCP Service Account](https://cloud.google.com/iam/docs/creating-managing-service-accounts),
__Create a Key__ and store the resulting keyfile.json.
- Using the GCP console, navigate to ${YOUR_BUCKET}, select **permissions**, and add your newly created service
account with 'Storage Object Admin' permissions.
- Get the mlflow command line tools and python libraries.
- These are necessary to use the MLflow CLI tool on your workstation, and satisfy its requirements for interacting
with GCP.
```shell script
pip install mlflow gcsfs google-cloud google-cloud-storage kubernetes
```
- Install the most recent NVIDIA daemonset driver:
- As of this writing, this was necessary for CUDA 11 compatibility. It may be an optional step for subsequent GKE
release versions.
```shell script
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-nvidia-v450.yaml
```
- **Add all our data to a GCP storage bucket**
- Training Data
- Download:
```shell script
wget -N https://rapidsai-cloud-ml-sample-data.s3-us-west-2.amazonaws.com/airline_small.parquet
```
- Push to your storage bucket
- We'll assume that it lives at GS_DATA_PATH: `gs://${YOUR_BUCKET}/airline_small.parquet`
- Environment Setup
- Upload envs/conda.yaml to your GCP cloud storage bucket
- We'll assume that it lives at GS_CONDA_PATH: `gs://${YOUR_BUCKET}/conda.yaml`
- Create a sub-folder in your GCP storage bucket for MLflow artifacts
- We'll assume that it lives at GS_ARTIFACT_PATH: `gs://${YOUR_BUCKET}/artifacts`
- Decide on a GCR image naming convention
- We'll assume that it lives at GCR_REPO: `gcr.io/${YOUR_PROJECT}/${YOUR_REPO}` \
for example: `gcr.io/my-gcp-project/example-repo`
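The uploads and paths described above can be prepared with `gsutil`; a minimal sketch, assuming the bucket already exists and `airline_small.parquet` has been downloaded locally:
```shell script
# Push the training data and the conda environment to your bucket
gsutil cp airline_small.parquet gs://${YOUR_BUCKET}/airline_small.parquet
gsutil cp envs/conda.yaml gs://${YOUR_BUCKET}/conda.yaml

# GCS has no real folders, so the artifacts/ prefix is created on first write;
# exporting the paths used throughout this guide is enough
export GS_DATA_PATH=gs://${YOUR_BUCKET}/airline_small.parquet
export GS_CONDA_PATH=gs://${YOUR_BUCKET}/conda.yaml
export GS_ARTIFACT_PATH=gs://${YOUR_BUCKET}/artifacts
export GCR_REPO=gcr.io/${YOUR_PROJECT}/${YOUR_REPO}
```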
## Setting Up a Cluster Environment
**This section is meant to act as a quick start guide; if you're interested in the configuration details, or want to adapt
this example for your specific needs, be sure to check out the more detailed links provided.**
***
- **Create a Secret for our GCP authentication file**
- **Note:** If something goes wrong with this step, you will likely encounter 'permission denied' errors during the
training process below. Ensure that ${YOUR_BUCKET} has the service account as a member, and is assigned the necessary
roles.
- The `keyfile.json` that you obtained for your GCP service account will be mapped into our tracking server
and training containers as `/etc/secrets/keyfile.json`. For this, we'll expose the keyfile as a secret within our
Kubernetes cluster.
```shell script
kubectl create secret generic gcsfs-creds --from-file=./keyfile.json
```
- This will subsequently be mounted as a secret volume, on deployment. For example, see `k8s_job_template.yaml`.
- **Backend Service Deployment using Postgres**.
- For this step, we will be using [Bitnami's Postgresql Helm](https://artifacthub.io/packages/helm/bitnami/postgresql)
package.
- Install our postgres database into our GKE cluster via Helm chart. In this situation, Postgresql will act as our
MLflow [Backend Store](https://www.mlflow.org/docs/latest/tracking.html#backend-stores), responsible for storing
metrics, parameters, and model registration information.
```shell script
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install mlf-db bitnami/postgresql --set postgresqlDatabase=mlflow_db --set postgresqlPassword=mlflow \
--set service.type=NodePort
```
- **Tracking Server deployment**
- Build a docker container that will host our MLflow [Tracking Service](https://www.mlflow.org/docs/latest/tracking.html#mlflow-tracking-servers)
and publish it to our GCR repository. The tracking service will provide a REST interface exposed via kubernetes load
balancer, which should be the target of our **MLFLOW_TRACKING_URI** variable.
```shell script
docker build --tag ${GCR_REPO}/mlflow-tracking-server:gcp --file Dockerfile.tracking .
docker push ${GCR_REPO}/mlflow-tracking-server:gcp
```
- Install the mlflow tracking server
```shell script
cd helm
helm install mlf-ts ./mlflow-tracking-server \
--set env.mlflowArtifactPath=${GS_ARTIFACT_PATH} \
--set env.mlflowDBAddr=mlf-db-postgresql \
--set service.type=LoadBalancer \
--set image.repository=${GCR_REPO}/mlflow-tracking-server \
--set image.tag=gcp
```
- Obtain the load balancer IP for the tracking server.
```shell script
watch kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mlf-ts-mlflow-tracking-server LoadBalancer 10.0.3.220 <pending> 80:30719/TCP 01m
....
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mlf-ts-mlflow-tracking-server LoadBalancer 10.0.3.220 [MLFLOW_TRACKING_SERVER] 80:30719/TCP 05m
```
- Verify that our tracking server is running, and its UI is available.
- Point your web browser at: `http://${MLFLOW_TRACKING_URI}`, and verify that you are presented with a clean MLflow
UI, as below.

---
**At this point, we should have a setup that looks like the diagram below, and we're ready to submit experiments to
our kubernetes cluster using the mlflow CLI tool.**

## Running MLflow experiments using kubernetes as a backend for job submission.
**MLflow's CLI utility allows us to launch project experiments/training as Kubernetes jobs. This section will go over
the process of setting the appropriate `kubectl` configuration, launching jobs, and viewing the results**
***
**Create and publish a training container**
- **Build a training container**.
- This creates the base docker container that MLflow will inject our project into, and deploy into our Kubernetes cluster
for training.
- `docker build --tag rapids-mlflow-training:gcp --file Dockerfile.training .`
**Configure your MLProject to use Kubernetes**
- **Export MLFLOW_TRACKING_URI**
- This should be the REST endpoint exposed by our tracking server (see above).
- `export MLFLOW_TRACKING_URI=http://${MLFLOW_TRACKING_URI}`
- **Edit `k8s_config.json`**
- `kube-context`: This should be set to your GKE cluster's context/credentials NAME field.
- `kube-job-template-path`: This is the path to your Kubernetes job template; it should be `k8s_job_template.yaml`
- `repository-uri`: Your GCR endpoint, ex. `${GCR_REPO}/rapids-mlflow-training`
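Because MLflow builds a per-run image and pushes it to `repository-uri`, your local Docker client also needs push access to GCR. If you have not already set that up, the standard helper is:
```shell script
# Register gcloud as a Docker credential helper for gcr.io
gcloud auth configure-docker
```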
**Run RAPIDS + hyperopt experiment in MLflow + Kubernetes**
- **Launch a new experiment**.
- **Note:** The first time this is run, it can take some time as the training container is pulled to the training node.
Subsequent runs will be significantly faster.
```shell script
mlflow run . --backend kubernetes \
--backend-config ./k8s_config.json -e hyperopt \
--experiment-name RAPIDS-MLFLOW-GCP \
-P conda_env=${GS_CONDA_PATH} -P fpath=${GS_DATA_PATH}
```
- **Log in to your tracking server and verify that the experiment was successfully logged, and that you have a registered
model**.


- **Serve a trained model, locally**
- Set your service account credentials
```shell script
export MLFLOW_TRACKING_URI=http://${MLFLOW_TRACKING_URI}
export GOOGLE_APPLICATION_CREDENTIALS=/.../keyfile.json
```
- Serve the model via MLflow CLI
```shell script
mlflow models serve -m models:/rapids_airline_hyperopt_k8s/1 -p 56767
```
- Run a sample Query
```shell script
python src/rf_test/test_query.py
Classification: ON-Time
```
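- (Optional) Query the serving endpoint directly. `test_query.py` simply posts JSON to MLflow's `/invocations` route; a hedged sketch with placeholder feature names follows, where the real column names, ordering, and values must match the schema used by `src/rf_test/train.py`:
```shell script
# Hypothetical payload in pandas "split" orientation; replace columns/values with the model's real schema
curl -s http://localhost:56767/invocations \
    -H 'Content-Type: application/json' \
    -d '{"columns": ["<feature_1>", "<feature_2>"], "data": [[0.0, 1.0]]}'
```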
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow
|
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/MLproject
|
name: cumlrapids
docker_env:
  image: rapids-mlflow-training:gcp
entry_points:
  hyperopt:
    parameters:
      algo: {type: str, default: 'tpe'}
      conda_env: {type: str, default: 'envs/conda.yaml'}
      fpath: {type: str}
    command: "/bin/bash src/k8s/entrypoint.sh src/rf_test/train.py --fpath={fpath} --algo={algo} --conda-env={conda_env}"
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow
|
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/Dockerfile.tracking
|
FROM python:3.8
RUN pip install \
mlflow \
boto3 \
gcsfs \
psycopg2-binary
COPY src/k8s/tracking_entrypoint.sh /tracking_entrypoint.sh
ENTRYPOINT [ "/bin/bash", "/tracking_entrypoint.sh" ]
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow
|
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/k8s_job_template.yaml
|
apiVersion: batch/v1
kind: Job
metadata:
  name: "{replaced with MLflow Project name}"
  namespace: default
spec:
  ttlSecondsAfterFinished: 100
  backoffLimit: 0
  template:
    spec:
      volumes:
        - name: gcsfs-creds
          secret:
            secretName: gcsfs-creds
            items:
              - key: keyfile.json
                path: keyfile.json
      containers:
        - name: "{replaced with MLflow Project name}"
          image: "{replaced with URI of Docker image created during Project execution}"
          command: ["{replaced with MLflow Project entry point command}"]
          volumeMounts:
            - name: gcsfs-creds
              mountPath: "/etc/secrets"
              readOnly: true
          resources:
            limits:
              nvidia.com/gpu: 1
          env:
            - name: PYTHONUNBUFFERED
              value: "1"
            - name: PYTHONIOENCODING
              value: "UTF-8"
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: "/etc/secrets/keyfile.json"
      restartPolicy: Never
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment
|
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/envs/conda.yaml
|
name: mlflow
channels:
- rapidsai
- nvidia
- conda-forge
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=1_gnu
- abseil-cpp=20200225.2=he1b5a44_2
- appdirs=1.4.3=py_1
- arrow-cpp=0.17.1=py38h1234567_11_cuda
- arrow-cpp-proc=1.0.1=cuda
- asn1crypto=1.4.0=pyh9f0ad1d_0
- aws-c-common=0.4.57=he1b5a44_1
- aws-c-event-stream=0.1.6=h72b8ae1_3
- aws-checksums=0.1.9=h346380f_0
- aws-sdk-cpp=1.7.164=h69f4914_4
- bokeh=2.2.1=py38_0
- boost-cpp=1.72.0=h9359b55_3
- brotli=1.0.9=he6710b0_0
- brotlipy=0.7.0=py38h1e0a361_1000
- bzip2=1.0.8=h7b6447c_0
- c-ares=1.16.1=h7b6447c_0
- ca-certificates=2020.7.22=0
- certifi=2020.6.20=py38_0
- cffi=1.14.3=py38he30daa8_0
- chardet=3.0.4=py38h32f6830_1007
- click=7.1.2=py_0
- cloudpickle=1.6.0=py_0
- configparser=5.0.0=py_0
- cryptography=3.1.1=py38h766eaa4_0
- cudatoolkit=11.0.221=h6bb024c_0
- cudf=0.15.0=cuda_11.0_py38_g71cb8c0e0_0
- cudnn=8.0.0=cuda11.0_0
- cuml=0.15.0=cuda11.0_py38_ga3002e587_0
- cupy=7.8.0=py38hb7c6141_0
- cytoolz=0.11.0=py38h7b6447c_0
- dask=2.27.0=py_0
- dask-core=2.27.0=py_0
- dask-cudf=0.15.0=py38_g71cb8c0e0_0
- databricks-cli=0.9.1=py_0
- distributed=2.27.0=py38_0
- dlpack=0.3=he6710b0_1
- docker-py=4.3.1=py38h32f6830_0
- docker-pycreds=0.4.0=py_0
- double-conversion=3.1.5=he6710b0_1
- entrypoints=0.3=py38h32f6830_1001
- faiss-proc=1.0.0=cuda
- fastavro=1.0.0.post1=py38h7b6447c_0
- fastrlock=0.5=py38he6710b0_0
- flask=1.1.2=pyh9f0ad1d_0
- freetype=2.10.2=h5ab3b9f_0
- fsspec=0.8.0=py_0
- gflags=2.2.2=he6710b0_0
- gitdb=4.0.5=py_0
- gitpython=3.1.8=py_0
- glog=0.4.0=he6710b0_0
- gorilla=0.3.0=py_0
- grpc-cpp=1.30.2=heedbac9_0
- gunicorn=20.0.4=py38h32f6830_1
- heapdict=1.0.1=py_0
- icu=67.1=he1b5a44_0
- idna=2.10=pyh9f0ad1d_0
- itsdangerous=1.1.0=py_0
- jinja2=2.11.2=py_0
- joblib=0.16.0=py_0
- jpeg=9b=h024ee3a_2
- krb5=1.18.2=h173b8e3_0
- lcms2=2.11=h396b838_0
- ld_impl_linux-64=2.33.1=h53a641e_7
- libblas=3.8.0=17_openblas
- libcblas=3.8.0=17_openblas
- libcudf=0.15.0=cuda11.0_g71cb8c0e0_0
- libcuml=0.15.0=cuda11.0_ga3002e587_0
- libcumlprims=0.15.0=cuda11.0_gdbd0d39_0
- libcurl=7.71.1=h20c2e04_1
- libedit=3.1.20191231=h14c3975_1
- libev=4.33=h516909a_1
- libevent=2.1.10=hcdb4288_2
- libfaiss=1.6.3=h328c4c8_1_cuda
- libffi=3.3=he6710b0_2
- libgcc-ng=9.3.0=h24d8f2e_16
- libgfortran-ng=7.3.0=hdf63c60_0
- libgomp=9.3.0=h24d8f2e_16
- libhwloc=2.1.0=h3c4fd83_0
- libiconv=1.16=h516909a_0
- liblapack=3.8.0=17_openblas
- libllvm10=10.0.1=hbcb73fb_5
- libnghttp2=1.41.0=h8cfc5f6_2
- libopenblas=0.3.10=h5a2b251_0
- libpng=1.6.37=hbc83047_0
- libprotobuf=3.12.4=hd408876_0
- librmm=0.15.0=cuda11.0_g8005ca5_0
- libssh2=1.9.0=h1ba5d50_1
- libstdcxx-ng=9.1.0=hdf63c60_0
- libthrift=0.13.0=hbe8ec66_6
- libtiff=4.1.0=h2733197_1
- libwebp-base=1.1.0=h516909a_3
- libxml2=2.9.10=h68273f3_2
- llvmlite=0.34.0=py38h269e1b5_4
- locket=0.2.0=py38_1
- lz4-c=1.9.2=he6710b0_1
- mako=1.1.3=pyh9f0ad1d_0
- markupsafe=1.1.1=py38h7b6447c_0
- msgpack-python=1.0.0=py38hfd86e86_1
- nccl=2.7.8.1=h4962215_0
- ncurses=6.2=he6710b0_1
- numba=0.51.2=py38h0573a6f_1
- numpy=1.19.1=py38hbc27379_2
- olefile=0.46=py_0
- openssl=1.1.1h=h7b6447c_0
- packaging=20.4=py_0
- pandas=1.1.1=py38he6710b0_0
- parquet-cpp=1.5.1=2
- partd=1.1.0=py_0
- pillow=7.2.0=py38hb39fc2d_0
- pip=20.2.2=py38_0
- protobuf=3.12.4=py38h950e882_0
- psutil=5.7.2=py38h7b6447c_0
- pyarrow=0.17.1=py38h1234567_11_cuda
- pycparser=2.20=pyh9f0ad1d_2
- pyopenssl=19.1.0=py_1
- pyparsing=2.4.7=py_0
- pysocks=1.7.1=py38h32f6830_1
- python=3.8.5=h7579374_1
- python-dateutil=2.8.1=py_0
- python-editor=1.0.4=py_0
- python_abi=3.8=1_cp38
- pytz=2020.1=py_0
- pyyaml=5.3.1=py38h7b6447c_1
- querystring_parser=1.2.4=py_0
- re2=2020.07.06=he1b5a44_1
- readline=8.0=h7b6447c_0
- requests=2.24.0=pyh9f0ad1d_0
- rmm=0.15.0=cuda_11.0_py38_g8005ca5_0
- scikit-learn=0.23.2=py38h0573a6f_0
- scipy=1.5.2=py38h8c5af15_0
- setuptools=49.6.0=py38_0
- simplejson=3.17.2=py38h1e0a361_0
- six=1.15.0=py_0
- smmap=3.0.4=pyh9f0ad1d_0
- snappy=1.1.8=he6710b0_0
- sortedcontainers=2.2.2=py_0
- spdlog=1.8.0=hfd86e86_1
- sqlite=3.33.0=h62c20be_0
- sqlparse=0.3.1=py_0
- tabulate=0.8.7=pyh9f0ad1d_0
- tbb=2020.3=hfd86e86_0
- tblib=1.7.0=py_0
- threadpoolctl=2.1.0=pyh5ca1d4c_0
- thrift-compiler=0.13.0=hbe8ec66_6
- thrift-cpp=0.13.0=6
- tk=8.6.10=hbc83047_0
- toolz=0.10.0=py_0
- tornado=6.0.4=py38h7b6447c_1
- treelite=0.92=py38h4e709cc_2
- typing_extensions=3.7.4.3=py_0
- ucx=1.8.1+g6b29558=cuda11.0_0
- ucx-py=0.15.0+g6b29558=py38_0
- ucx-proc=*=gpu
- urllib3=1.25.10=py_0
- websocket-client=0.57.0=py38h32f6830_2
- werkzeug=1.0.1=pyh9f0ad1d_0
- wheel=0.35.1=py_0
- xz=5.2.5=h7b6447c_0
- yaml=0.2.5=h7b6447c_0
- zict=2.0.0=py_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.5=h9ceee32_0
- pip:
- aiohttp==3.6.2
- alembic==1.4.1
- argon2-cffi==20.1.0
- async-generator==1.10
- async-timeout==3.0.1
- attrs==20.2.0
- azure-core==1.8.1
- azure-storage-blob==12.5.0
- backcall==0.2.0
- bleach==3.2.1
- boto3==1.15.12
- botocore==1.18.12
- cachetools==4.1.1
- decorator==4.4.2
- defusedxml==0.6.0
- future==0.18.2
- gcsfs==0.7.1
- google-auth==1.22.1
- google-auth-oauthlib==0.4.1
- hyperopt==0.2.4
- ipykernel==5.3.4
- ipython==7.18.1
- ipython-genutils==0.2.0
- isodate==0.6.0
- jedi==0.17.2
- jmespath==0.10.0
- json5==0.9.5
- jsonschema==3.2.0
- jupyter-client==6.1.7
- jupyter-core==4.6.3
- jupyterlab==2.2.8
- jupyterlab-pygments==0.1.2
- jupyterlab-server==1.2.0
- kubernetes==11.0.0
- mistune==0.8.4
- mlflow==1.11.0
- msrest==0.6.19
- multidict==4.7.6
- nbclient==0.5.0
- nbconvert==6.0.7
- nbformat==5.0.7
- nest-asyncio==1.4.1
- networkx==2.5
- notebook==6.1.4
- oauthlib==3.1.0
- pandocfilters==1.4.2
- parso==0.7.1
- pexpect==4.8.0
- pickleshare==0.7.5
- prometheus-client==0.8.0
- prometheus-flask-exporter==0.18.0
- prompt-toolkit==3.0.7
- psycopg2-binary==2.8.6
- ptyprocess==0.6.0
- pyasn1==0.4.8
- pyasn1-modules==0.2.8
- pygments==2.7.1
- pyrsistent==0.17.3
- pyzmq==19.0.2
- requests-oauthlib==1.3.0
- rsa==4.6
- s3transfer==0.3.3
- send2trash==1.5.0
- sqlalchemy==1.3.13
- terminado==0.9.1
- testpath==0.4.4
- tqdm==4.50.0
- traitlets==5.0.4
- treelite-runtime==0.92
- wcwidth==0.2.5
- webencodings==0.5.1
- yarl==1.6.0
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/helm
|
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/helm/mlflow-tracking-server/Chart.yaml
|
apiVersion: v2
name: mlflow-tracking-server
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0
| 0 |
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/helm
|
rapidsai_public_repos/cloud-ml-examples/mlflow/docker_environment/helm/mlflow-tracking-server/.helmignore
|
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
| 0 |