Docker Compose does not create a network and returns: "ERROR no vni provided"

Question


I updated my Ubuntu 22.04, including Docker (all of this runs in a VMware Workstation virtual machine). I'm trying to run Docker Compose with this network configuration:

networks:
  gateway:
    driver: overlay

Each time, I get this error:

Error response from daemon: No VNI provided.

Does anyone else have the same issue? How did you fix it?

EDIT:
I did manage to run Docker Compose, but I had to initialize a Docker swarm on the machine (specifying an IPv6 address with --advertise-addr), and then add the attachable: true parameter to the docker-compose file.
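Under those assumptions, the relevant part of the working compose file would look roughly like this (a sketch; the service name `app` and its image are hypothetical placeholders — only the `gateway` network, the `overlay` driver, and `attachable: true` come from the question):

```yaml
# Sketch of a docker-compose.yml using the overlay network from the question.
# The service "app" and its image are hypothetical placeholders.
services:
  app:
    image: nginx:alpine
    networks:
      - gateway

networks:
  gateway:
    driver: overlay
    # Without this, standalone containers (as started by docker compose)
    # cannot attach to a swarm-scoped overlay network.
    attachable: true
```

Note that overlay networks require the daemon to be part of a swarm, which is why `docker swarm init` was needed first.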

Here are the logs:

May 24 18:59:53 jobvm systemd[1]: var-lib-docker-overlay2-8fd8d17f71538934b364ebf00981229478771efc92771b8462f56c521dde6fee\x2dinit-merged.mount: Deactivated successfully.
May 24 18:59:53 jobvm systemd[1]: var-lib-docker-overlay2-8fd8d17f71538934b364ebf00981229478771efc92771b8462f56c521dde6fee-merged.mount: Deactivated successfully.
May 24 18:59:54 jobvm dockerd[6782]: time="2023-05-24T18:59:54.011252035+02:00" level=info msg="initialized VXLAN UDP port to 4789 "
May 24 18:59:54 jobvm kernel: [ 3724.339025] br0: renamed from ov-001003-i54kf
May 24 18:59:54 jobvm NetworkManager[1124]: <info>  [1684947594.3994] manager: (vx-001003-i54kf): new Vxlan device (/org/freedesktop/NetworkManager/Devices/89)
May 24 18:59:54 jobvm systemd-udevd[11077]: Using default interface naming scheme 'v249'.
May 24 18:59:54 jobvm kernel: [ 3724.426143] vxlan0: renamed from vx-001003-i54kf
May 24 18:59:54 jobvm kernel: [ 3724.465417] br0: port 1(vxlan0) entered blocking state
May 24 18:59:54 jobvm kernel: [ 3724.465455] br0: port 1(vxlan0) entered disabled state
May 24 18:59:54 jobvm kernel: [ 3724.485784] device vxlan0 entered promiscuous mode
May 24 18:59:54 jobvm kernel: [ 3724.512803] br0: port 1(vxlan0) entered blocking state
May 24 18:59:54 jobvm kernel: [ 3724.512833] br0: port 1(vxlan0) entered forwarding state
May 24 18:59:54 jobvm gnome-shell[2343]: Removing a network device that was not added
May 24 18:59:55 jobvm kernel: [ 3725.136361] veth0: renamed from vethcac73c0
May 24 18:59:55 jobvm systemd-udevd[11084]: Using default interface naming scheme 'v249'.
May 24 18:59:55 jobvm NetworkManager[1124]: <info>  [1684947595.1404] manager: (veth1beb0ed): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
May 24 18:59:55 jobvm NetworkManager[1124]: <info>  [1684947595.1634] manager: (vethcac73c0): new Veth device (/org/freedesktop/NetworkManager/Devices/91)
May 24 18:59:55 jobvm kernel: [ 3725.232621] br0: port 2(veth0) entered blocking state
May 24 18:59:55 jobvm kernel: [ 3725.232668] br0: port 2(veth0) entered disabled state
May 24 18:59:55 jobvm kernel: [ 3725.233558] device veth0 entered promiscuous mode
May 24 18:59:55 jobvm kernel: [ 3725.235246] br0: port 2(veth0) entered blocking state
May 24 18:59:55 jobvm kernel: [ 3725.235262] br0: port 2(veth0) entered forwarding state
May 24 18:59:55 jobvm gnome-shell[2343]: Removing a network device that was not added
May 24 18:59:55 jobvm kernel: [ 3725.554593] br0: port 2(veth0) entered disabled state
May 24 18:59:55 jobvm kernel: [ 3725.741170] eth0: renamed from veth1beb0ed
May 24 18:59:55 jobvm kernel: [ 3725.949681] br0: port 2(veth0) entered blocking state
May 24 18:59:55 jobvm kernel: [ 3725.951086] br0: port 2(veth0) entered forwarding state
May 24 18:59:55 jobvm gnome-shell[2343]: Removing a network device that was not added
May 24 18:59:56 jobvm dockerd[6782]: time="2023-05-24T18:59:56.163088290+02:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
May 24 18:59:56 jobvm dockerd[6782]: time="2023-05-24T18:59:56.163116865+02:00" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]"
May 24 18:59:56 jobvm dockerd[6782]: time="2023-05-24T18:59:56.592933070+02:00" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
May 24 18:59:56 jobvm dockerd[6782]: time="2023-05-24T18:59:56.592963599+02:00" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]"
May 24 18:59:59 jobvm systemd-udevd[11115]: Using default interface naming scheme 'v249'.
May 24 18:59:59 jobvm systemd-udevd[11113]: Using default interface naming scheme 'v249'.
May 24 18:59:59 jobvm NetworkManager[1124]: <info>  [1684947599.1857] manager: (veth5ca75b6): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
May 24 18:59:59 jobvm NetworkManager[1124]: <info>  [1684947599.1868] manager: (veth6019b7d): new Veth device (/org/freedesktop/NetworkManager/Devices/93)
May 24 18:59:59 jobvm NetworkManager[1124]: <info>  [1684947599.1878] manager: (vethdb2d5b6): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
May 24 18:59:59 jobvm NetworkManager[1124]: <info>  [1684947599.1920] manager: (vethc6da593): new Veth device (/org/freedesktop/NetworkManager/Devices/95)
May 24 18:59:59 jobvm kernel: [ 3729.224976] veth1: renamed from veth6019b7d
May 24 18:59:59 jobvm kernel: [ 3729.377436] br0: port 3(veth1) entered blocking state
May 24 18:59:59 jobvm kernel: [ 3729.377439] br0: port 3(veth1) entered disabled state
May 24 18:59:59 jobvm kernel: [ 3729.377834] device veth1 entered promiscuous mode
May 24 18:59:59 jobvm kernel: [ 3729.378445] veth2: renamed from vethc6da593
May 24 18:59:59 jobvm gnome-shell[2343]: Removing a network device that was not added
May 24 18:59:59 jobvm gnome-shell[2343]: Removing a network device that was not added
May 24 18:59:59 jobvm NetworkManager[1124]: <info>  [1684947599.4273] manager: (vethb3fa547): new Veth device (/org/freedesktop/NetworkManager/Devices/96)
May 24 18:59:59 jobvm kernel: [ 3729.426281] br0: port 4(veth2) entered blocking state
May 24 18:59:59 jobvm kernel: [ 3729.426284] br0: port 4(veth2) entered disabled state
May 24 18:59:59 jobvm kernel: [ 3729.426331] device veth2 entered promiscuous mode
May 24 18:59:59 jobvm kernel: [ 3729.426438] br0: port 4(veth2) entered blocking state
May 24 18:59:59 jobvm kernel: [ 3729.426439] br0: port 4(veth2) entered forwarding state
May 24 18:59:59 jobvm kernel: [ 3729.427114] docker_gwbridge: port 2(veth1bcd1b7) entered blocking state
May 24 18:59:59 jobvm kernel: [ 3729.427116] docker_gwbridge: port 2(veth1bcd1b7) entered disabled state
May 24 18:59:59 jobvm kernel: [ 3729.427242] device veth1bcd1b7 entered promiscuous mode
May 24 18:59:59 jobvm kernel: [ 3729.427649] docker_gwbridge: port 2(veth1bcd1b7) entered blocking state
May 24 18:59:59 jobvm kernel: [ 3729.427652] docker_gwbridge: port 2(veth1bcd1b7) entered forwarding state
May 24 18:59:59 jobvm NetworkManager[1124]: <info>  [1684947599.4286] manager: (veth1bcd1b7): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
May 24 18:59:59 jobvm systemd-udevd[11131]: Using default interface naming scheme 'v249'.
May 24 18:59:59 jobvm NetworkManager[1124]: <info>  [1684947599.4357] manager: (veth4ca5297): new Veth device (/org/freedesktop/NetworkManager/Devices/98)
May 24 18:59:59 jobvm kernel: [ 3729.434940] docker_gwbridge: port 3(vethdc001de) entered blocking state
May 24 18:59:59 jobvm kernel: [ 3729.434946] docker_gwbridge: port 3(vethdc001de) entered disabled state
May 24 18:59:59 jobvm kernel: [ 3729.435087] device vethdc001de entered promiscuous mode
May 24 18:59:59 jobvm kernel: [ 3729.435671] docker_gwbridge: port 3(vethdc001de) entered blocking state
May 24 18:59:59 jobvm kernel: [ 3729.435673] docker_gwbridge: port 3(vethdc001de) entered forwarding state
May 24 18:59:59 jobvm NetworkManager[1124]: <info>  [1684947599.4375] manager: (vethdc001de): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
May 24 18:59:59 jobvm containerd[1211]: time="2023-05-24T18:59:59.462723059+02:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 24 18:59:59 jobvm containerd[1211]: time="2023-05-24T18:59:59.462781160+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 24 18:59:59 jobvm containerd[1211]: time="2023-05-24T18:59:59.462788985+02:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 18:59:59 jobvm containerd[1211]: time="2023-05-24T18:59:59.462978897+02:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/e72abc89230a93dbd8de3bda66bf8ad91795798da5bc3b403687855a60f52593 pid=11191 runtime=io.containerd.runc.v2
May 24 18:59:59 jobvm systemd[1]: Started libcontainer container e72abc89230a93dbd8de3bda66bf8ad91795798da5bc3b403687855a60f52593.
May 24 18:59:59 jobvm containerd[1211]: time="2023-05-24T18:59:59.493194858+02:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 24 18:59:59 jobvm containerd[1211]: time="2023-05-24T18:59:59.493247518+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 24 18:59:59 jobvm containerd[1211]: time="2023-05-24T18:59:59.493273348+02:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 24 18:59:59 jobvm containerd[1211]: time="2023-05-24T18:59:59.493414016+02:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/e7d9fa8cd2d70150e012730b41e1fbd3d4d9da9cc816f2fc0219821f46e30131 pid=11234 runtime=io.containerd.runc.v2
May 24 18:59:59 jobvm systemd[1]: run-docker-runtime\x2drunc-moby-e7d9fa8cd2d70150e012730b41e1fbd3d4d9da9cc816f2fc0219821f46e30131-runc.zHc9AQ.mount: Deactivated successfully.
May 24 18:59:59 jobvm systemd[1]: Started libcontainer container e7d9fa8cd2d70150e012730b41e1fbd3d4d9da9cc816f2fc0219821f46e30131.
May 24 18:59:59 jobvm systemd-udevd[11129]: Using default interface naming scheme 'v249'.
May 24 18:59:59 jobvm systemd-udevd[11089]: Using default interface naming scheme 'v249'.
May 24 18:59:59 jobvm kernel: [ 3729.601012] eth0: renamed from vethdb2d5b6
May 24 18:59:59 jobvm gnome-shell[2343]: Removing a network device that was not added
May 24 18:59:59 jobvm kernel: [ 3729.613178] docker_gwbridge: port 2(veth1bcd1b7) entered disabled state
May 24 18:59:59 jobvm kernel: [ 3729.613496] docker_gwbridge: port 3(vethdc001de) entered disabled state
May 24 18:59:59 jobvm kernel: [ 3729.637543] eth1: renamed from veth4ca5297
May 24 18:59:59 jobvm NetworkManager[1124]: <info>  [1684947599.6535] device (vethdc001de): carrier: link connected
May 24 18:59:59 jobvm kernel: [ 3729.652800] IPv6: ADDRCONF(NETDEV_CHANGE): vethdc001de: link becomes ready
May 24 18:59:59 jobvm kernel: [ 3729.652890] docker_gwbridge: port 3(vethdc001de) entered blocking state
May 24 18:59:59 jobvm kernel: [ 3729.652892] docker_gwbridge: port 3(vethdc001de) entered forwarding state
May 24 18:59:59 jobvm gnome-shell[2343]: Removing a network device that was not added
May 24 18:59:59 jobvm kernel: [ 3729.688790] eth0: renamed from veth5ca75b6
May 24 18:59:59 jobvm kernel: [ 3729.717129] br0: port 3(veth1) entered blocking state
May 24 18:59:59 jobvm kernel: [ 3729.717134] br0: port 3(veth1) entered forwarding state
May 24 18:59:59 jobvm kernel: [ 3729.745070] eth1: renamed from vethb3fa547
May 24 18:59:59 jobvm gnome-shell[2343]: Removing a network device that was not added
May 24 18:59:59 jobvm NetworkManager[1124]: <info>  [1684947599.7739] device (veth1bcd1b7): carrier: link connected
May 24 18:59:59 jobvm kernel: [ 3729.772881] IPv6: ADDRCONF(NETDEV_CHANGE): veth1bcd1b7: link becomes ready
May 24 18:59:59 jobvm kernel: [ 3729.772974] docker_gwbridge: port 2(veth1bcd1b7) entered blocking state
May 24 18:59:59 jobvm kernel: [ 3729.772977] docker_gwbridge: port 2(veth1bcd1b7) entered forwarding state
May 24 18:59:59 jobvm gnome-shell[2343]: Removing a network device that was not added
May 24 19:00:00 jobvm systemd[1]: docker-e72abc89230a93dbd8de3bda66bf8ad91795798da5bc3b403687855a60f52593.scope: Deactivated successfully.
May 24 19:00:00 jobvm dockerd[6782]: time="2023-05-24T19:00:00.047287409+02:00" level=info msg="ignoring event" container=e72abc89230a93dbd8de3bda66bf8ad91795798da5bc3b403687855a60f52593 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 24 19:00:00 jobvm containerd[1211]: time="2023-05-24T19:00:00.047403871+02:00" level=info msg="shim disconnected" id=e72abc89230a93dbd8de3bda66bf8ad91795798da5bc3b403687855a60f52593
May 24 19:00:00 jobvm containerd[1211]: time="2023-05-24T19:00:00.047447705+02:00" level=warning msg="cleaning up after shim disconnected" id=e72abc89230a93dbd8de3bda66bf8ad91795798da5bc3b403687855a60f52593 namespace=moby
May 24 19:00:00 jobvm containerd[1211]: time="2023-05-24T19:00:00.047464967+02:00" level=info msg="cleaning up dead shim"
May 24 19:00:00 jobvm containerd[1211]: time="2023-05-24T19:00:00.053601909+02:00" level=warning msg="cleanup warnings time=\"2023-05-24T19:00:00+02:00\" level=info msg=\"starting signal loop\" namespace=moby pid=11340 runtime=io.containerd.runc.v2\n"
May 24 19:00:00 jobvm kernel: [ 3730.055925] br0: port 4(veth2) entered disabled state
May 24 19:00:00 jobvm kernel: [ 3730.055971] vethdb2d5b6: renamed from eth0
May 24 19:00:00 jobvm NetworkManager[1124]: <info>  [1684947600.1017] manager: (vethdb2d5b6): new Veth device (/org/freedesktop/NetworkManager/Devices/100)
May 24 19:00:00 jobvm kernel: [ 3730.103787] docker_gwbridge: port 3(vethdc001de) entered disabled state
May 24 19:00:00 jobvm kernel: [ 3730.103964] veth4ca5297: renamed from eth1
May 24 19:00:00 jobvm NetworkManager[1124]: <info>  [1684947600.1460] manager: (veth4ca5297): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
May 24 19:00:00 jobvm kernel: [ 3730.146807] docker_gwbridge: port 3(vethdc001de) entered disabled state
May 24 19:00:00 jobvm kernel: [ 3730.147441] device vethdc001de left promiscuous mode
May 24 19:00:00 jobvm kernel: [ 3730.147443] docker_gwbridge: port 3(vethdc001de) entered disabled state
May 24 19:00:00 jobvm NetworkManager[1124]: <info>  [1684947600.1857] device (vethdc001de): released from master device docker_gwbridge
May 24 19:00:00 jobvm gnome-shell[2343]: Removing a network device that was not added
May 24 19:00:00 jobvm kernel: [ 3730.213417] br0: port 4(veth2) entered disabled state
May 24 19:00:00 jobvm kernel: [ 3730.213686] device veth2 left promiscuous mode
May 24 19:00:00 jobvm kernel: [ 3730.213688] br0: port 4(veth2) entered disabled state
May 24 19:00:00 jobvm gnome-shell[2343]: message repeated 2 times: [ Removing a network device that was not added]
May 24 19:00:00 jobvm systemd[1]: run-docker-netns-b4a8d0809dc9.mount: Deactivated successfully.
May 24 19:00:00 jobvm systemd[1]: var-lib-docker-overlay2-c5223a1719d4dda5429fde03bd331827868f28a0fa686ad9e7520edd0bc6388e-merged.mount: Deactivated successfully.
May 24 19:00:01 jobvm avahi-daemon[1119]: Joining mDNS multicast group on interface veth1bcd1b7.IPv6 with address fe80::2c90:faff:fecb:96ca.
May 24 19:00:01 jobvm avahi-daemon[1119]: New relevant interface veth1bcd1b7.IPv6 for mDNS.
May 24 19:00:01 jobvm avahi-daemon[1119]: Registering new address record for fe80::2c90:faff:fecb:96ca on veth1bcd1b7.*.

Answer 1

Score: 1


I had the same issue after adding a second interface to my server and recreating the overlay network. I had to:

  • remove the node from docker swarm
  • restart the docker daemon
  • rerun docker swarm init

Restarting the daemon did the trick.
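The steps above can be sketched as shell commands (the flags are standard Docker CLI; the address passed to --advertise-addr is a placeholder you must replace with your own):

```shell
# 1. Remove this node from the swarm (--force is required on a manager node)
docker swarm leave --force

# 2. Restart the Docker daemon
sudo systemctl restart docker

# 3. Re-initialize the swarm, advertising the interface/address you want
docker swarm init --advertise-addr <ip-or-interface>
```

Re-initializing the swarm recreates the overlay networking state, which is presumably why the "No VNI provided" error disappears afterwards.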

huangapple
  • Published on 2023-05-24 23:24:32
  • Please keep this link when reposting: https://go.coder-hub.com/76325132.html