Send Kubernetes container application logs (stdout and stderr) to a UDP server

Question


I know Docker has options like the syslog log-driver and log-opt, which can be used to send logs to, say, a UDP server.

Marathon is the Docker orchestrator here, and a config file contains the following:

    {
      "key": "log-driver",
      "value": "syslog"
    },
    {
      "key": "log-opt",
      "value": "syslog-address=udp://some-udp-server:port"
    },
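
For context, those two Marathon keys map roughly to the plain Docker CLI flags below (the image name is just a placeholder):

    # Roughly what Marathon asks the Docker daemon to do for each task
    docker run \
      --log-driver=syslog \
      --log-opt syslog-address=udp://some-udp-server:port \
      my-app-image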

The existing setup is such that certain downstream systems/entities take the information received on this UDP server to create visualisations on Grafana.

How do I achieve the same in a Kubernetes manifest file that I'm deploying via Helm 3? Or is there a third-party application I need to use? Basically, I want to send the logs that show up in the kubectl logs -f <pod_name> output to this UDP server with minimal intrusion. I would only like to replace this part of the flow so that I don't have to disturb any of the downstream systems.

Answer 1

Score: 2


As David suggested, there is no option to control the log target. However, since a log-collector application was requested, I'm writing this answer.

If your application is streaming UDP logs, you can use the open-source Graylog. It uses MongoDB & Elasticsearch as its backend databases. We have been using Graylog to collect logs from application pods.

Now, regarding a log collector for the kubectl logs -f <POD> output: you can push all of these logs from the worker node's file system using the fluentd collector. The log location is /var/log/pods.

You can use the fluentd collector along with Graylog's GELF UDP input:

Fluentd -> pushing over GELF UDP -> Graylog input saving to Elasticsearch

Here is a reference you can follow: https://docs.fluentd.org/how-to-guides/graylog2
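
A minimal sketch of what that fluentd pipeline might look like, assuming fluentd runs as a DaemonSet with /var/log/pods mounted from the host and the fluent-plugin-gelf output plugin installed; the Graylog hostname, port 12201, and file paths are placeholders rather than values from this thread:

    # Tail everything the kubelet writes under /var/log/pods (raw lines, no parsing)
    <source>
      @type tail
      path /var/log/pods/**/*.log
      pos_file /var/log/fluentd-pods.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type none
      </parse>
    </source>

    # Push every tailed line to a Graylog GELF input over UDP
    <match kubernetes.**>
      @type gelf
      host graylog.example.com   # placeholder: your Graylog GELF UDP input
      port 12201
      protocol udp
      <buffer>
        flush_interval 5s
      </buffer>
    </match>

With containerd or CRI-O the files under /var/log/pods are in CRI format, so in practice you would swap the none parser for a CRI/regexp parser and enrich records with Kubernetes metadata, but the shape of the pipeline stays the same.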

The example above uses Graylog 2; Graylog 3 is now also available as open source, so I would suggest checking that out.

You can refer to my GitHub repo: https://github.com/harsh4870/OCI-public-logging-uma-agent

It will give you a better idea of how a deployment writes log files onto the node's filesystem and how they are then processed by a collector; it doesn't use fluentd, but it works as a reference.

The Oracle OCI UMA agent does a similar job to the fluentd collector, parsing and pushing logs to the backend.

Answer 2

Score: 0


You can use services like:

1- Graylog GELF driver

2-EFK

3- ...

and have an independent log (by container name & container ID) for each container.

https://devopscube.com/setup-efk-stack-on-kubernetes/#:~:text=Conclusion-,What%20is%20EFK%20Stack%3F,large%20volumes%20of%20log%20data.

.................................................
input:
    tcp:
      service:
        type: ClusterIP
      ports:
        - name: gelfHttp
          port: 12221
.........................................................
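
If this fragment is a values override for a Graylog Helm chart input service (chart keys and ports vary), a quick smoke test for a GELF UDP input, once one is enabled, might look like this; the hostname and port below are placeholders:

    # Send one minimal, uncompressed GELF message over UDP and exit after ~1 second
    echo '{"version":"1.1","host":"gelf-test","short_message":"hello graylog"}' \
      | nc -u -w1 graylog-gelf.logging.svc.cluster.local 12201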
