How to fix Cilium CrashLoopBackOff on minikube?


Question


I installed Meshery with Helm.
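
(For reference, a Helm-based Meshery install is typically along these lines; the chart repo URL, release name, and namespace below are the common defaults rather than values taken from this environment:)

# Add the Meshery chart repository and install the chart
helm repo add meshery https://meshery.io/charts/
helm repo update
helm install meshery meshery/meshery --namespace meshery --create-namespace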

All pods are OK except the Cilium one:

meshery-app-mesh-7d8cb57c85-cfk7j                   1/1     Running            0                63m
meshery-cilium-69b4f7d6c5-t9w55                     0/1     CrashLoopBackOff   16 (3m16s ago)   63m
meshery-consul-67669c97cc-hxvlz                     1/1     Running            0                64m
meshery-istio-687c7ff5bc-bn8mq                      1/1     Running            0                63m
meshery-kuma-66f947b988-887hm                       1/1     Running            0                63m
meshery-linkerd-854975dcb9-rsgtl                    1/1     Running            0                63m
meshery-nginx-sm-55675f57b5-xfpgz                   1/1     Running            0                63m

Logs show:

level=info msg="Registering workloads with Meshery Server for version v1.14.0-snapshot.4" app=cilium-adapter
panic: interface conversion: error is *fs.PathError, not *errors.Error

goroutine 42 [running]:
github.com/layer5io/meshkit/errors.GetCode({0x278e020?, 0xc0004f7e30?})
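
(A log excerpt like the one above would typically be pulled from the crashing pod with something like the following; the exact command used here is an assumption:)

# Current container logs for the crashing pod
kubectl logs meshery-cilium-69b4f7d6c5-t9w55
# Logs from the previously crashed container instance, useful after a restart
kubectl logs meshery-cilium-69b4f7d6c5-t9w55 --previous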

I also checked the events:

kubectl get events --field-selector involvedObject.name=meshery-cilium-69b4f7d6c5-t9w55
LAST SEEN   TYPE      REASON    OBJECT                                MESSAGE
17m         Normal    Pulling   pod/meshery-cilium-69b4f7d6c5-t9w55   Pulling image "layer5/meshery-cilium:stable-latest"
2m44s       Warning   BackOff   pod/meshery-cilium-69b4f7d6c5-t9w55   Back-off restarting failed container meshery-cilium in pod meshery-cilium-69b4f7d6c5-t9w55_default(d7ccd0e8-27e5-4f40-89bc-f8a6dc8fa25a)

I am adding the events from the troubled pod.
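(These events presumably come from describing the pod, roughly like this; the exact command is an assumption:)

# Show full pod details, including the Events section at the bottom
kubectl describe pod meshery-cilium-69b4f7d6c5-t9w55
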
Events:

  Type     Reason           Age                    From     Message
  ----     ------           ----                   ----     -------
  Normal   Pulling          4h25m (x55 over 8h)    kubelet  Pulling image "layer5/meshery-cilium:stable-latest"
  Warning  BackOff          3h45m (x1340 over 8h)  kubelet  Back-off restarting failed container meshery-cilium in pod meshery-cilium-69b4f7d6c5-t9w55_default(d7ccd0e8-27e5-4f40-89bc-f8a6dc8fa25a)
  Warning  FailedMount      21m (x6 over 21m)      kubelet  MountVolume.SetUp failed for volume "kube-api-access-tcdml" : object "default"/"kube-root-ca.crt" not registered
  Warning  NetworkNotReady  20m (x19 over 21m)     kubelet  network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  Warning  BackOff          72s (x82 over 19m)     kubelet  Back-off restarting failed container meshery-cilium in pod meshery-cilium-69b4f7d6c5-t9w55_default(d7ccd0e8-27e5-4f40-89bc-f8a6dc8fa25a)

The Nginx pod also shows network problems:

Events:
  Type     Reason           Age                 From             Message
  ----     ------           ----                ----             -------
  Warning  NodeNotReady     22m                 node-controller  Node is not ready
  Warning  NetworkNotReady  22m (x18 over 23m)  kubelet          network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  Warning  FailedMount      22m (x7 over 23m)   kubelet          MountVolume.SetUp failed for volume "kube-api-access-lscv2" : object "default"/"kube-root-ca.crt" not registered

How to fix this issue?

Answer 1

Score: 1


As per this blog by Patrick Londa, your Cilium pod is getting the CrashLoopBackOff error due to "Back-off restarting failed container", which means that your container terminated suddenly after Kubernetes started it.

1. This can occur due to a temporary resource overload caused by a spike in activity. The fix is to adjust periodSeconds or timeoutSeconds to give the application a longer window of time to respond (see the sketch below).

2. Sometimes it happens due to insufficient memory resources. You can increase the memory limit by changing "resources: limits" in the container's resource manifest, as shown in the blog.

Verify the above steps and let me know if this fixes the issue.
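
(As a rough illustration of both points, this is what the two adjustments could look like in a container spec. The probe path/port, timing values, and memory figures below are placeholder assumptions, not values taken from the Meshery chart:)

containers:
  - name: meshery-cilium                 # container name taken from the events above
    image: layer5/meshery-cilium:stable-latest
    livenessProbe:
      httpGet:
        path: /healthz                   # probe endpoint is an assumption
        port: 8080                       # port is an assumption
      periodSeconds: 30                  # probe less frequently (point 1)
      timeoutSeconds: 10                 # allow a slower response before failing (point 1)
      failureThreshold: 5                # tolerate a few failed checks before a restart
    resources:
      requests:
        memory: "256Mi"
      limits:
        memory: "512Mi"                  # raised memory limit (point 2); pick a value your node can hold

(On minikube the node's total memory also bounds what limit is sensible, so treat the numbers above purely as placeholders.)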

huangapple
  • Published on 2023-07-13 13:13:24
  • When reposting, please keep the link to this article: https://go.coder-hub.com/76676124.html