
Solving GPU Challenges in CI/CD Pipelines with Dagger
January 28, 2025


GPUs are a critical tool for tasks like machine learning, data processing, and high-performance computing. However, integrating GPUs into CI/CD pipelines presents unique challenges that many teams struggle to address. GPUs are expensive, cannot easily be shared across containers, and are often underutilized in workflows that don’t consistently require their processing power.
At Dagger, we’ve been listening to the community and working on ways to simplify GPU integration. With the latest updates to our documentation, we’re making it easier to use Fly.io and Lambda Labs to run GPU-enabled pipelines on-demand—saving you time, cost, and headaches.
Why GPUs and Pipelines Don’t Play Well Together
The integration of GPUs in CI/CD pipelines is far from straightforward. Here’s why:
Cost and Accessibility: GPUs are resource-intensive and expensive. Unlike CPUs, which can be virtualized and shared efficiently, GPUs are bound to specific hardware. Once a GPU is allocated to a container, it can’t easily be shared with other containers.
Limited Use Cases: Not every step in a pipeline needs a GPU. For example, tasks like linting or running unit tests can easily execute on CPUs, but steps involving machine learning inference or rendering demand GPU power. This imbalance often leads to underutilized GPU resources.
Infrastructure Complexity: Setting up environments with GPU support, ensuring proper drivers are installed, and managing secure connections between local and remote resources add significant complexity to CI/CD workflows.
How Dagger Helps Simplify GPU Use in a Pipeline
With Dagger, you can offload GPU-specific tasks to remote runners like Fly.io or Lambda Labs, while keeping the rest of your pipeline local. Here’s how it works:
On-Demand GPU Usage: Deploy GPU resources only when needed for specific steps in your pipeline, avoiding the cost of running idle GPUs.
Persistent Caching: GPU time is expensive, so you don't want to re-run the entire pipeline on every change. Dagger caches each step and transparently re-runs only the parts of the pipeline that changed.
Infrastructure Agnosticism: Every step in a Dagger pipeline runs inside a container, so dependency management stays the same whether the pipeline runs locally or on remote infrastructure. A short sketch of what a GPU-enabled step looks like follows this list.
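To make that concrete, here is a minimal sketch of a GPU-enabled step using the Dagger Go SDK. It assumes an engine started with experimental GPU support enabled; the base image, cache key, and command are illustrative placeholders rather than the official example:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// Connect to the Dagger engine (local by default; see the
	// remote-runner sketch later in this post for offloading).
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// A cache volume so expensive artifacts (model weights, datasets)
	// persist across runs. "models" is an illustrative cache key.
	models := client.CacheVolume("models")

	// A GPU-only step: ExperimentalWithAllGPUs requests every GPU visible
	// to the engine, and requires GPU support enabled on that engine.
	out, err := client.Container().
		From("nvidia/cuda:12.2.0-base-ubuntu22.04").
		WithMountedCache("/models", models).
		ExperimentalWithAllGPUs().
		WithExec([]string{"nvidia-smi"}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Because a GPU request is just another method call on a container, CPU-only steps in the same pipeline simply leave it out, which is what keeps the expensive hardware reserved for the steps that actually need it.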
Check out the new documentation for GPU Support.
How to Leverage a Remote Nvidia GPU
To see GPU integration in action, check out the demo below, where Sam, Dagger’s co-founder, walks through a practical example:
Running a local pipeline that offloads a step requiring GPU inference to Fly.io.
Configuring the pipeline to deploy an Nvidia GPU on Fly.io, run inference using Ollama, and then clean up the GPU instance afterward.
Persisting data across runs using caching, and monitoring execution with Dagger Cloud. A sketch of the remote-runner setup that makes the offloading work follows this list.
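The mechanism behind the offloading is pointing the Dagger SDK (or CLI) at a remote engine instead of a locally provisioned one, via the experimental `_EXPERIMENTAL_DAGGER_RUNNER_HOST` variable. Here is a minimal sketch, assuming an engine is already deployed on a Fly.io GPU machine; the app name, port, and command are placeholders, not values from the demo:

```go
package main

import (
	"context"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// Tell the SDK to use a remote Dagger engine instead of provisioning
	// a local one. The address is a placeholder; substitute the endpoint
	// of the engine you deployed (for example, on a Fly.io GPU machine).
	os.Setenv("_EXPERIMENTAL_DAGGER_RUNNER_HOST", "tcp://my-gpu-runner.fly.dev:2345")

	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// From here on, every step executes on the remote GPU engine while
	// the pipeline is still driven from your local machine. A real
	// pipeline would run model inference here; --version just proves
	// the container works.
	_, err = client.Container().
		From("ollama/ollama").
		ExperimentalWithAllGPUs().
		WithExec([]string{"ollama", "--version"}).
		Sync(ctx)
	if err != nil {
		panic(err)
	}
}
```

The cleanup step from the demo, destroying the Fly.io machine once the run finishes, is what keeps the GPU from accruing cost while idle.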
What’s Next?
We hope the community appreciates the new example, and we look forward to documenting more use cases. Additionally, we’re exploring ways to dynamically allocate GPUs within live pipelines, allowing users to integrate GPU steps seamlessly without breaking their existing workflows.
If you’re already working with GPUs in your pipelines—or want to get started—we hope the resources above will help you hit the ground running. Dive into the examples, try out the tools, and share your feedback!
Have ideas or questions? Let us know in the comments or join the conversation on our Discord channel.