Using Dockerfiles with Dagger
May 9, 2023
Jeremy Adams
Tutorial
Most organizations have Dockerfiles somewhere. Some have a lot of them. They might be managed by another team, and maybe your team only interacts with the built images, but chances are that if you’re thinking about CI, you’ve thought about writing, maintaining, simplifying, or getting rid of your Dockerfiles.
If you’re using Dagger to turn your pipelines into code via a Dagger SDK in Go, Python, or Node.js (or via the GraphQL API directly), then you have a couple of options on how to handle your Dockerfiles.
Keep your existing Dockerfiles and drive Docker builds from Dagger
Turn your Dockerfiles into Dagger code
Let’s examine each of these.
Build from your existing Dockerfiles
You might want to do this if you have a deep investment in Dockerfiles and they are an important interface for developers or platform engineers. You might also want to focus on orchestrating the rest of your pipeline first and tackling your working Dockerfiles later.
It’s easy to read in a directory of content that includes a Dockerfile and produce an image with Dagger: you simply initialize a new Container from a Directory that has a Dockerfile in it.
Python:
import anyio
import sys
import dagger

async def main():
    async with dagger.Connection(dagger.Config(log_output=sys.stderr)) as client:
        build = client.host().directory(".").docker_build()
        await build.publish("jeremyatdockerhub/myexample:latest")

anyio.run(main)
Read more about building Dockerfiles in the Python SDK Reference.
Go:
package main

import (
	"context"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	_, err = client.Host().Directory(".").
		DockerBuild().
		Publish(ctx, "jeremyatdockerhub/myexample:latest")
	if err != nil {
		panic(err)
	}
}
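If your Dockerfile has a non-standard name or location, or expects build arguments, DockerBuild takes optional parameters. Here's a hedged sketch that drops into the main() above; the Dockerfile path and APP_VERSION build arg are placeholders, and the exact option fields should be checked against the Go SDK Reference:

// Hedged sketch: build from a specific Dockerfile with a build arg.
// "build/Dockerfile.prod" and APP_VERSION are placeholders.
_, err = client.Host().Directory(".").
	DockerBuild(dagger.DirectoryDockerBuildOpts{
		Dockerfile: "build/Dockerfile.prod",
		BuildArgs: []dagger.BuildArg{
			{Name: "APP_VERSION", Value: "1.2.3"},
		},
	}).
	Publish(ctx, "jeremyatdockerhub/myexample:latest")
if err != nil {
	panic(err)
}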
Read more about building Dockerfiles in the Go SDK Reference.
Node.js:
import { connect } from "@dagger.io/dagger"

connect(async (client) => {
  const imageRef = await client
    .host()
    .directory(".")
    .dockerBuild()
    .publish("jeremyatdockerhub/myexample:latest")
  console.log(`Published image to: ${imageRef}`)
}, { LogOutput: process.stderr })
Read more about building Dockerfiles in the Node.js SDK Reference.
GraphQL API:
Read more about building Dockerfiles in the API Reference.
Turn your Dockerfiles into Dagger code
Whether your Dockerfiles are super simple (and thus trivial to turn into Dagger SDK code) or super gnarly, converting your existing Dockerfiles into Dagger code means you can take full advantage of both Dagger and your language of choice to create testable, extensible builds and pipelines. Maybe Dockerfile syntax is a mystery to many of your engineers, so folks are reluctant to touch those files, or maybe you’ve outgrown the Dockerfile format and need the extra control a full programming language can offer, including types, functions, loops, and conditionals.
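As a small, hypothetical illustration of what that control buys you, here's a hedged Go sketch that loops over two base image versions and runs the same test step against each, something a Dockerfile has no direct construct for. The versions and test command are placeholders, and the snippet assumes a connected client and ctx as in the earlier examples:

// Hypothetical sketch: reuse one build/test definition across base images.
// A Dockerfile has no loop; in Go it's an ordinary for statement.
for _, version := range []string{"1.19", "1.20"} {
	tester := client.Container().
		From("golang:" + version).
		WithDirectory("/src", client.Host().Directory(".")).
		WithWorkdir("/src").
		WithExec([]string{"go", "test", "./..."})

	// Requesting Stdout resolves the lazy pipeline and actually runs the tests.
	if _, err := tester.Stdout(ctx); err != nil {
		panic(err)
	}
}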
If you’re familiar with Dockerfiles or the concepts behind them, you’ll find the Dagger API implemented by the SDKs has familiar constructs for working with Containers, Directories, Files, Git, Environment Variables, Secrets, etc.
Luckily, we have an excellent blog post and technical guide that illustrates the approach in Go, Python, and Node.js; a condensed sketch also follows the list below. The guide shows you how to:
Understand and break down a Dockerfile into steps
Create a Dagger client
Write a Dagger pipeline to replicate the Dockerfile build process, taking advantage of functions and the Dagger API implemented by your SDK of choice to:
Configure a container with all required dependencies and environment variables for an application
Download and build the application source code in the container
Set the container entrypoint
Publish the built container image to Docker Hub
Test the Dagger pipeline locally
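To make those steps concrete, here's a hedged Go sketch that condenses them into one pipeline: it pulls application source from Git, configures a container with a build dependency and an environment variable, builds the code, sets the entrypoint, and publishes the image. The repository URL, package, and image address are placeholders, the snippet assumes a connected client and ctx as in the earlier examples, and the linked guide has the complete walkthrough in each SDK.

// Hypothetical sketch: a Dockerfile-style build expressed as Dagger Go code.
// Repo URL, package, and image address are placeholders.
src := client.Git("https://github.com/your-org/your-app").Branch("main").Tree()

app := client.Container().
	From("golang:1.20-alpine").                               // FROM
	WithExec([]string{"apk", "add", "--no-cache", "git"}).    // RUN: install a build dependency
	WithEnvVariable("CGO_ENABLED", "0").                      // ENV
	WithDirectory("/src", src).                               // COPY (here, from a Git source)
	WithWorkdir("/src").                                      // WORKDIR
	WithExec([]string{"go", "build", "-o", "/bin/app", "."}). // RUN: build the application
	WithEntrypoint([]string{"/bin/app"})                      // ENTRYPOINT

// Publish the built image, as in the guide's final step.
if _, err := app.Publish(ctx, "jeremyatdockerhub/myexample:latest"); err != nil {
	panic(err)
}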
Mapping Dockerfiles to Dagger
To help with mapping your Dockerfile to Dagger, here’s a handy table of some common Dockerfile commands and their Dagger equivalents. It actually helps to think of Dagger as much more than a Dockerfile equivalent, because it is! Some things, such as mounting directories, cache mounts, and build args, are closer to what you’d do with docker build, docker run, or buildx; a short sketch of a couple of these follows the table. A lot of this functionality lives in the docs under the Container type, so that’s a great place to browse.
<TABLE>
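As a hedged sketch of two rows that go beyond plain Dockerfile instructions, here's roughly how a cache mount and a bind-style directory mount look on the Container type in Go. The cache key and paths are placeholders, and the snippet assumes a connected client and ctx:

// Hypothetical sketch: cache and directory mounts, roughly what you'd reach
// for with docker run -v or a BuildKit cache mount.
goModCache := client.CacheVolume("go-mod") // named cache volume; the key is a placeholder

build := client.Container().
	From("golang:1.20").
	WithMountedDirectory("/src", client.Host().Directory(".")). // mount host sources into the container
	WithMountedCache("/go/pkg/mod", goModCache).                // persist module downloads across runs
	WithWorkdir("/src").
	WithExec([]string{"go", "build", "./..."})

// Requesting Stdout resolves the lazy pipeline and runs the build.
if _, err := build.Stdout(ctx); err != nil {
	panic(err)
}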
Conclusion
Whether you need to keep your existing Dockerfiles around for a while or are ready to turn them into Dagger SDK code, you’ve now got the tools to incorporate your existing Dockerfiles into your Dagger pipelines. As always, we’re available in Discord if you need help!