Send kubernetes container application logs (stdout and stderr) to UDP server
Question
I know Docker has options like the syslog log-driver and log-opt, which can be used to send logs to, say, a UDP server. Marathon is the Docker orchestrator here, and a config file contains the following:
{
    "key": "log-driver",
    "value": "syslog"
},
{
    "key": "log-opt",
    "value": "syslog-address=udp://some-udp-server:port"
}
The existing setup is such that certain downstream systems/entities take the information received on this UDP server to create visualisations on Grafana.
How do I achieve the same in a k8s manifest file that I'm deploying via helm3? Or is there a third-party application I need to use? Basically, I want to send the logs that appear in the kubectl logs -f <pod_name> command to this UDP server with minimal intrusion. I would only like to replace this part of the flow so that I don't have to disturb any of the downstream systems.
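For context, the UDP leg of this flow can be exercised locally. A minimal Python sketch, where the host, port, and the <134> priority value (facility local0, severity info) are illustrative assumptions rather than anything from the question:

```python
import socket

def send_syslog_udp(message: str, host: str, port: int) -> None:
    """Send one RFC 3164-style syslog line over UDP (fire-and-forget)."""
    # <134> = facility local0 (16) * 8 + severity informational (6)
    payload = f"<134>{message}".encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

if __name__ == "__main__":
    # Stand-in for the downstream UDP server from the question
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))      # ephemeral port
    port = server.getsockname()[1]

    send_syslog_udp("my-pod: hello from stdout", "127.0.0.1", port)
    data, _ = server.recvfrom(4096)
    print(data.decode("utf-8"))        # prints: <134>my-pod: hello from stdout
    server.close()
```

In a real cluster, a node-level collector (rather than the application) would typically do the sending, as the answers below describe.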
Answer 1
Score: 2
As David suggested, there is no option to control the log target. However, since a log-collector application was requested, I'm writing this answer.

If your application is streaming UDP logs, you can use the open-source Graylog. It uses MongoDB & Elasticsearch as backend databases. We have been using Graylog to collect logs from application PODs.

Now, regarding a log collector for kubectl logs -f <POD>: you can push all these logs from the worker node's file system using the fluentd collector. The log location will be /var/log/pods.

You can use the Fluentd collector along with the Graylog GELF UDP input:

Fluentd -> pushing over GELF UDP -> Graylog input saving to Elasticsearch
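A minimal fluentd sketch of that flow, assuming the fluent-plugin-gelf output plugin used in the guide linked in this answer; the tag, paths, and Graylog hostname are placeholders:

```
# Sketch, not a drop-in config: paths, tag, and Graylog host are assumptions.
<source>
  @type tail
  path /var/log/pods/**/*.log
  pos_file /var/log/fluentd-pods.log.pos
  tag kube.*
  <parse>
    @type none
  </parse>
</source>

<match kube.**>
  @type gelf                 # from fluent-plugin-gelf
  host graylog.example.com   # hypothetical Graylog address
  port 12201                 # Graylog's default GELF UDP port
  protocol udp
</match>
```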
Here is a reference you can follow: https://docs.fluentd.org/how-to-guides/graylog2

The example above uses Graylog2; Graylog 3 is now also available as open source, and I would suggest checking it out.

You can refer to my GitHub repo: https://github.com/harsh4870/OCI-public-logging-uma-agent

It will give you a better idea of how a deployment sets up log files on the node's filesystem, which are then processed by a collector. It doesn't use fluentd, but it works as a reference.

The Oracle OCI UMA agent does a similar job to the fluentd collector: parsing & pushing logs to the backend.
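To run a collector like fluentd on every worker node and read /var/log/pods, the usual pattern is a DaemonSet that mounts the node's log path. A minimal sketch; the image tag, names, and namespace are placeholders, not from the answer:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector        # hypothetical name
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16   # plus the gelf output plugin
          volumeMounts:
            - name: varlogpods
              mountPath: /var/log/pods  # where kubelet writes container logs
              readOnly: true
      volumes:
        - name: varlogpods
          hostPath:
            path: /var/log/pods
```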
Answer 2
Score: 0
EFK setup reference: https://devopscube.com/setup-efk-stack-on-kubernetes/#:~:text=Conclusion-,What%20is%20EFK%20Stack%3F,large%20volumes%20of%20log%20data.
You can use services like:

1- graylog GELF Driver
2- EFK
3- ...

and have an independent (container name & container ID) log for each container. A sample input service configuration:

input:
  tcp:
    service:
      type: ClusterIP
      ports:
        - name: gelfHttp
          port: 12221