Lesson 05: Jetson Docker Containers
5.1 Docker
What is Docker?
Docker is a platform that enables developers to build, package, and deploy applications in containers. Containers are lightweight, portable, and self-sufficient units that include everything needed to run an application, such as the code, runtime, libraries, and dependencies. This ensures that the application runs consistently across different environments.
Why Use Docker on Jetson Orin Nano?
The NVIDIA Jetson Orin Nano is a powerful edge AI platform designed to deploy machine learning and AI applications. Docker complements this by providing a flexible, consistent, and scalable environment for development and deployment. Here’s why Docker stands out:
- Isolation and Consistency: Docker containers encapsulate all dependencies, ensuring your AI applications run consistently across various environments without conflicts.
- Portability: Containers can be easily moved between different devices, making it simple to develop on one machine and deploy on the Jetson Orin Nano.
- Resource Efficiency: Docker containers share the host system’s kernel, making them more lightweight and efficient than full virtual machines.
- GPU Acceleration: Docker supports NVIDIA GPU acceleration, which is crucial for AI and machine learning workloads. With the NVIDIA Container Toolkit, you can leverage the GPU capabilities of the Jetson Orin Nano for enhanced performance.
- Scalability: Docker makes it easy to scale your applications by deploying multiple containers and managing them with orchestration tools like Kubernetes.
Why Docker is Better Than Python Virtual Environment and Conda for AI/ML on Jetson Orin Nano
- Isolation: Docker provides a higher isolation level than Python virtual environments and Conda. This prevents conflicts between different projects and dependencies.
- Comprehensive Environment: While Python virtual environments and Conda are excellent for managing Python and R dependencies, Docker can encapsulate an entire application stack, including non-Python dependencies, system tools, and configurations.
- Deployment: Docker containers can be deployed consistently across different environments, making the move from development to production much easier. This is harder with virtual environments and Conda, which are better suited to development than deployment.
- GPU Support: Docker's integration with the NVIDIA Container Toolkit allows seamless use of the Jetson Orin Nano's GPU, which Python virtual environments do not support natively and Conda supports only with additional configuration.
- Automation and CI/CD: Docker integrates well with continuous integration and continuous deployment (CI/CD) pipelines, facilitating automated testing and deployment of AI/ML models.
Getting Started with Docker on Jetson Orin Nano
- Check Docker Version: The Jetpack system already has Docker installed. You can verify the version using the following commands:
# Show all Docker information
sudo docker info
# Display the Docker version
sudo docker version
# Display the short Docker version
sudo docker --version
# Display the Docker system folder
sudo docker info | grep -i root
- Install NVIDIA Container Toolkit: This enables GPU acceleration for your Docker containers.
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
- Run a Docker Container: Use an NVIDIA CUDA base image to leverage GPU acceleration.
sudo docker run --runtime=nvidia --gpus all nvidia/cuda:11.0-base nvidia-smi
- Develop and Deploy: Create Dockerfiles for your AI/ML applications, build Docker images, and run your containers on the Jetson Orin Nano.
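As a concrete sketch of that last step, a minimal Dockerfile for a PyTorch-based app could look like the following. The base-image tag, requirements.txt, and app.py are illustrative placeholders; pick an l4t-pytorch tag from NGC that matches your JetPack/L4T release.

```dockerfile
# Illustrative Dockerfile; the tag and file names are placeholders.
# Choose an l4t-pytorch tag on NGC that matches your JetPack/L4T version.
FROM nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3

WORKDIR /app

# Install Python dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

# Copy the application code and set the default command
COPY app.py .
CMD ["python3", "app.py"]
```

You would then build and run it with the nvidia runtime, e.g. `sudo docker build -t my-ai-app .` followed by `sudo docker run --runtime nvidia my-ai-app`.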
5.2 Configure Docker Containers
Install the latest version of JetPack 6 on Orin. The following versions are supported:
- JetPack 4.6.1+ (>= L4T R32.7.1) for Nano/TX1/TX2
- JetPack 5.1+ (>= L4T R35.2.1) for Xavier
- JetPack 6.0 GA (L4T R36.3.0) for Orin
Type the jtop or jetson_release command to check your JetPack version.
jtop
# or
jetson_release
You can download a newer version of JetPack from the JetPack SDK web page.
To check which NVIDIA container packages and runtime the system has installed:
sudo dpkg --get-selections | grep nvidia
libnvidia-container-tools install
libnvidia-container1:arm64 install
nvidia-container install
nvidia-container-toolkit install
nvidia-container-toolkit-base install
sudo docker info | grep nvidia
Runtimes: io.containerd.runc.v2 nvidia runc
Default Runtime: nvidia
Clone the Repo
This will download and install jetson-containers utilities:
sudo apt-get update && sudo apt-get install -y git python3-pip python3-venv
git clone https://github.com/dusty-nv/jetson-containers
cd jetson-containers/
python3 -m venv venv
source ./venv/bin/activate
pip3 install -r requirements.txt
pip3 install wheel
Setup Docker Default Runtime
If you are building containers, you need to set Docker's default-runtime to nvidia, so that the NVCC compiler and GPU are available during docker build operations. Add "default-runtime": "nvidia" to your /etc/docker/daemon.json configuration file before attempting to build the containers:
sudo nano /etc/docker/daemon.json
{
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia"
}
Then restart the Docker service (or reboot your system before proceeding). You can then confirm the change by checking docker info:
sudo systemctl restart docker
sudo docker info | grep "Default Runtime"
Default Runtime: nvidia
Add to Docker group
To avoid using sudo with Docker commands, add your user to the Docker group:
sudo usermod -aG docker ${USER}
groups
airsupply adm cdrom sudo audio dip video plugdev render i2c lpadmin gdm sambashare weston-launch gpio jtop
Reboot the system (or log out and back in) for the group change to take effect.
Relocating Docker Data Root
Containers can take up a lot of disk space. If you have external storage available (NVMe preferred), you are advised to relocate your Docker container cache to the larger drive. Format the drive as ext4 and add it to /etc/fstab so it is mounted at boot; if it is not mounted before the Docker daemon starts, the directory will not exist for Docker to use.
Stop the Docker service, then copy the existing Docker cache from /var/lib/docker to a directory on your drive of choice (in this case, /ssd/docker):
sudo systemctl stop docker
sudo cp -r /var/lib/docker /ssd/docker
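Copying the cache alone does not switch Docker over; the daemon also has to be pointed at the new location. Docker's standard `data-root` option in /etc/docker/daemon.json does this, shown here merged with the nvidia runtime settings from earlier (the /ssd/docker path follows the example above):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia",
    "data-root": "/ssd/docker"
}
```

Restart Docker with `sudo systemctl restart docker` and confirm the new location with `sudo docker info | grep "Docker Root Dir"`.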
5.3 Nvidia NGC Containers
Finding Suitable Containers from NGC:
Visit NVIDIA NGC (https://ngc.nvidia.com/). Enter the keyword "l4t" in the search bar, and a list of images that can run on Jetson will be displayed.
Currently, nearly 20 container images are available, categorized into the following six categories:
- Base Category:
  - NVIDIA L4T Base: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-base
  - NVIDIA L4T CUDA: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-cuda
  - NVIDIA Container Toolkit: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/k8s/containers/container-toolkit
- Deep Learning Category:
  - NVIDIA L4T ML (comprehensive deep learning development environment): https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-ml
  - NVIDIA L4T PyTorch: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch
  - NVIDIA L4T TensorFlow: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-tensorflow
  - NVIDIA L4T TensorRT: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-tensorrt
- Vision Category:
  - DeepStream-l4t: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/deepstream-l4t
  - DeepStream L4T Intelligent Video Analytics Demo: https://catalog.ngc.nvidia.com/orgs/nvidia/helm-charts/video-analytics-demo-l4t
  - DeepStream People Detection Demo on Jetson: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/deepstream-peopledetection
  - Gaze Demo for Jetson/L4T: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/jetson-gaze
  - Pose Demo for Jetson/L4T: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/jetson-pose
- Conversational Category:
  - Voice Demo for Jetson/L4T: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/jetson-voice
  - Riva Speech Skills: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/riva/containers/riva-speech
- Educational Category:
  - DLI Getting Started with AI on Jetson Nano: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/dli/containers/dli-nano-ai
  - DLI Building Video AI Applications at the Edge on Jetson Nano: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/dli/containers/dli-nano-deepstream
- Medical Category: Application container images dedicated to Clara AGX.
Instructions for Downloading and Verifying Image Files:
- Click the "Get Container" button at the top right corner of the page to display the list of currently available image versions. Choose an image whose tag is closest to your JetPack's L4T version. For example, the Jetson Orin Developer Kit with JetPack 6.0 GA runs L4T 36.3.0, so pull the nearest available l4t-ml tag:
docker pull nvcr.io/nvidia/l4t-ml:r36.2.0-py3
- Once the download is complete, you can check it with the following command:
docker images
- If the following information appears, the download is complete:
REPOSITORY              TAG          IMAGE ID       CREATED       SIZE
nvcr.io/nvidia/l4t-ml   r34.1.1-py3  93c715e8751b   6 weeks ago   16.2GB
This method can be used for any L4T version image file.
5.4 Nvidia NGC Resources
The first step to using these resources is to create an NGC account. This account is separate from the NVIDIA Developer account, so you need to apply for it separately. This guide will walk you through creating an account and obtaining an API key, allowing you to easily access NGC content.
- Create an NGC Account:
- Go to NGC NVIDIA, where you will be directed to the CATALOG screen.
- Click “Welcome Guest” at the top right corner, then select “Sign in/Sign Up” from the dropdown menu.
- Enter your email address on the next screen and click “Continue.”
- Fill in the password you want to set. The password must include uppercase letters, lowercase letters, and numbers. Complete the “I am not a robot” verification step.
- A confirmation email will be sent to the email address you provided. Complete the registration by following the instructions in the email.
Once the account is successfully created, you can log in to the NGC center.
Obtain an NGC API Key:
- After logging into NGC, you will see your login name and a hashed string at the top right corner. Click on your username and select “Setup” from the dropdown menu.
- In the Setup options, click on “Get API Key” on the left side.
- Click “Generate API KEY” on the top right corner of the screen, and confirm the action by clicking “Confirm.”
- An 85-character string will appear at the bottom of the screen.
Since the API key content is only visible at the creation time and cannot be retrieved later, copy and save it securely. This key is crucial for the entire training process and final inference. If forgotten, you must generate a new one, which might require retraining your models.
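Since the key cannot be retrieved later, one common pattern is to save it to a permission-restricted file right away and read it back when logging in. This is a sketch; the file path and "YourAPIKey" are placeholders, not NGC requirements:

```shell
# Sketch: store the NGC API key in a file only the owner can read.
# "YourAPIKey" and the file location are illustrative placeholders.
KEYFILE="$HOME/.ngc/api_key"
mkdir -p "$(dirname "$KEYFILE")"
printf '%s' "YourAPIKey" > "$KEYFILE"
chmod 600 "$KEYFILE"
# Later, feed the stored key to docker without echoing it on screen:
# docker login -u '$oauthtoken' --password-stdin nvcr.io < "$KEYFILE"
```

Reading the key from a file with --password-stdin keeps it out of your shell history and process list.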
Usage:
Use your API key to log in to the NGC registry by entering the following command and following the prompts:
- NGC CLI:
ngc config set
- Docker:
For the username, enter '$oauthtoken' exactly as shown; it is a literal token that tells NGC the password is an API key rather than an account password.
docker login nvcr.io
Username: $oauthtoken
Password: YourAPIKey
NGC will also display the warning: "API Key generated successfully. This is the only time your API Key will be displayed. Keep your API Key secret. Do not share it or store it where others can see or copy it."
- Docker Login to NGC:
After generating the API key, the final step is to log in to NGC from your Jetson Orin Nano Developer Kit. This allows full access to NGC resources. Use the following commands:
export KEY=YourAPIKey
docker login -u '$oauthtoken' --password-stdin nvcr.io <<< $KEY
If the credentials are correct, you will see a "Login Succeeded" message. This login only needs to be done once; you can now start using all the resources available on NGC.