How to pass `--gpus all` option to Docker with Go SDK?
Question
I have seen how to do some basic commands such as running a container, pulling images, listing images, etc. from the SDK examples.

I am working on a project where I need to use the GPU from within the container. My system has a GPU, I have installed the drivers, and I have also installed the nvidia-container-runtime.

If we set the Go SDK aside for a moment, I can run the following command to get the nvidia-smi output on my host system:

docker run -it --rm --gpus all nvidia/cuda:10.0-base nvidia-smi

I need to do this via the SDK. Here is the code I am starting with. It currently prints "hello world", but in practice I will be running the nvidia-smi command in its place:
package main

import (
    "context"
    "os"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/client"
    "github.com/docker/docker/pkg/stdcopy"
)

func main() {
    ctx := context.Background()
    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        panic(err)
    }
    RunContainer(ctx, cli)
}

func RunContainer(ctx context.Context, cli *client.Client) {
    // Pull the image first.
    reader, err := cli.ImagePull(ctx, "nvidia/cuda:10.0-base", types.ImagePullOptions{})
    if err != nil {
        panic(err)
    }
    defer reader.Close()
    // io.Copy(os.Stdout, reader)

    // Create the container. "echo hello world" is a placeholder for nvidia-smi.
    resp, err := cli.ContainerCreate(ctx, &container.Config{
        Image: "nvidia/cuda:10.0-base",
        Cmd:   []string{"echo", "hello world"},
        // Tty: false,
    }, nil, nil, nil, "")
    if err != nil {
        panic(err)
    }

    if err := cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{}); err != nil {
        panic(err)
    }

    // Wait for the container to exit, then copy its logs to stdout/stderr.
    statusCh, errCh := cli.ContainerWait(ctx, resp.ID, container.WaitConditionNotRunning)
    select {
    case err := <-errCh:
        if err != nil {
            panic(err)
        }
    case <-statusCh:
    }

    out, err := cli.ContainerLogs(ctx, resp.ID, types.ContainerLogsOptions{ShowStdout: true})
    if err != nil {
        panic(err)
    }
    stdcopy.StdCopy(os.Stdout, os.Stderr, out)
}
Answer 1

Score: 3
import "github.com/docker/cli/opts"
// ...
gpuOpts := opts.GpuOpts{}
gpuOpts.Set("all")
resp, err := cli.ContainerCreate(ctx, &container.Config{
Image: "nvidia/cuda:10.0-base",
Cmd: []string{"echo", "hello world"},
// Tty: false,
}, &container.HostConfig{Resources: container.Resources{DeviceRequests: gpuOpts.Value()}}, nil, nil, "")