Restart Connector at timed intervals

Question

I have a JDBC connector on an Oracle database; however, sometimes the connector stops receiving data from the DB. Sometimes the task indicates an error, sometimes not.

The only way I found to solve this problem was to restart the task at timed intervals. With that in mind, is there any way to do this directly in Kafka, more specifically in the source connector YAML?
Answer 1

Score: 0

Based on another Stack Overflow solution, I used a Kubernetes CronJob to do this. The CronJob below kills the Connect pod on a daily basis. (This is the only solution I found.)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kill-connect-pod
spec:
  schedule: "0 8 * * *"
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: kafka-connect-killer
          containers:
            - name: kill-connect-pod
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - |
                  kubectl delete pod $(kubectl get pods | grep ^edh-connect | awk '{print $1}')
          restartPolicy: OnFailure
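A lighter-weight variant is to have the CronJob call Kafka Connect's REST API and restart only the connector, rather than deleting the whole pod. This is a sketch, not a drop-in config: the service hostname `edh-connect-api`, port `8083`, and connector name `my-oracle-source` are assumptions you would replace with your deployment's values. The `POST /connectors/{name}/restart` endpoint is part of the Connect REST API; the `includeTasks=true` parameter (which also restarts the connector's tasks) requires Kafka 3.0 or later.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-connector
spec:
  schedule: "0 8 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: restart-connector
              image: curlimages/curl:latest
              command:
                - /bin/sh
                - -c
                # Hypothetical in-cluster REST endpoint and connector name;
                # adjust both to match your Connect deployment.
                - |
                  curl -s -X POST "http://edh-connect-api:8083/connectors/my-oracle-source/restart?includeTasks=true"
          restartPolicy: OnFailure
```

Because this only restarts the connector and its tasks, the Connect worker itself stays up, so other connectors running on the same worker are unaffected.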