AKS with Keda: pods get terminated during execution
Question
I have deployed the Azure Queue Trigger Function to Kubernetes with KEDA for event-driven activation and scaling. It works fine with smaller jobs, but if the time taken exceeds 5 minutes, the pods get terminated automatically, and new pods get created. Here is my YAML file, and I have set the functionTimeout to 00:30:00 in host.json. Any help would be highly appreciated.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: queue-test
  labels:
    app: queue-test
spec:
  selector:
    matchLabels:
      app: queue-test
  template:
    metadata:
      labels:
        app: queue-test
    spec:
      containers:
      - name: queue-test
        image:
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-test
  labels: {}
spec:
  scaleTargetRef:
    name: queue-test
  triggers:
  - type: azure-queue
    metadata:
      direction: in
      queueName: sample-queue
      connectionFromEnv: AzureWebJobsStorage
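
For reference, the functionTimeout mentioned above is configured in host.json at the root of the function app. A minimal sketch (only the timeout is shown; any other settings already in your host.json stay as they are):

{
  "version": "2.0",
  "functionTimeout": "00:30:00"
}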
Answer 1
Score: 1
Pods getting terminated after 5 minutes could be due to Kubernetes Liveness Probes, Pod resource limits, or KEDA scaling behavior.
Steps to debug:
- Check pod logs: kubectl logs <pod_name> --previous.
- Adjust Liveness Probe parameters or disable it.
- Increase resource limits in your Kubernetes Deployment.
- Review and adjust KEDA trigger sensitivity and scale-in behavior (see the sketch after this list).
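
One hedged way to act on the last step: KEDA's scale-in behavior is controlled on the ScaledObject itself via pollingInterval and cooldownPeriod (the default cooldownPeriod is 300 seconds, i.e. 5 minutes). A sketch based on the manifest in the question, with illustrative values only:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-test
spec:
  scaleTargetRef:
    name: queue-test
  pollingInterval: 30     # how often KEDA checks the queue, in seconds
  cooldownPeriod: 1800    # wait 30 minutes after the last message before scaling back to zero
  minReplicaCount: 0
  maxReplicaCount: 5
  triggers:
  - type: azure-queue
    metadata:
      queueName: sample-queue
      connectionFromEnv: AzureWebJobsStorage

After applying the change, kubectl get scaledobject queue-test shows whether the new settings were picked up. Note that cooldownPeriod only affects scaling to zero; scaling between 1 and maxReplicaCount is handled by the HPA that KEDA creates.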
I hope this helps!