How to collect logs from different servers to a central server (Elasticsearch and Kibana)

Question

I have been assigned the task of creating a central logging server. In my case there are many web app servers spread across different locations. My task is to collect the logs from these different servers and manage them on a central server running Elasticsearch and Kibana.

Questions

  1. Is it possible to collect logs from servers that have different public IPs? If so, how?
  2. How many resources (CPU, memory, storage) are required on the central server?

Things seen

  • I have only seen example setups where the logs and the applications are all on the same machine.

I am looking for a way to send logs over public IPs to Elasticsearch.


Answer 1

Score: 3

I would like to differ from Ishara's answer. You can ship logs directly from Filebeat to Elasticsearch without using Logstash. If your logs are of a generic type (system logs, nginx logs, Apache logs), this approach saves you the extra cost and maintenance of Logstash, because Filebeat's modules provide built-in parsing.

If your servers run a Debian-based OS, I have prepared a shell script to install and configure Filebeat. You need to change the Elasticsearch server URLs and adjust the "filebeat modules enable" line near the end according to the modules you want to configure.

Regarding your first question: yes, you can run a Filebeat agent on each server and send the data to a centralized Elasticsearch cluster. As for your second question, it depends on the amount of logs the Elasticsearch server has to process and store, and also on where Kibana is hosted.

sudo wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

sudo echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

sudo apt-get update && sudo apt-get install -y filebeat

sudo systemctl enable filebeat

sudo bash -c "cat >/etc/filebeat/filebeat.yml" <<FBEOL
filebeat.inputs:

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.name: "filebeat-system"
setup.template.pattern: "filebeat-system-*"
setup.template.settings:
  index.number_of_shards: 1

setup.ilm.enabled: false

setup.kibana:

output.elasticsearch:
  hosts: ["10.32.66.55:9200", "10.32.67.152:9200", "10.32.66.243:9200"]
  indices:
    - index: "filebeat-system-%{+yyyy.MM.dd}"
      when.equals:
        event.module: system

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

logging.level: warning

FBEOL

sudo filebeat modules enable system
sudo systemctl restart filebeat
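
The configuration above sends logs over plain HTTP to private IPs, which assumes the web servers and Elasticsearch share a private network. If Filebeat has to reach Elasticsearch over a public IP, as the question describes, the connection should be encrypted and authenticated. The snippet below is a minimal sketch of the relevant output.elasticsearch settings; the hostname, credentials, and CA path are placeholders, not values from the original answer:

output.elasticsearch:
  # Use HTTPS instead of plain HTTP when crossing the public Internet
  protocol: "https"
  hosts: ["elasticsearch.example.com:9200"]   # placeholder public endpoint
  # Basic-auth credentials for a user allowed to write the filebeat indices (placeholders)
  username: "filebeat_writer"
  password: "changeme"
  # CA certificate used to verify the Elasticsearch server certificate (placeholder path)
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]

After editing the configuration you can check connectivity from each web server with "sudo filebeat test output" before restarting the service.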

Answer 2

Score: 2

  1. Yes, it is possible to collect logs from servers that have different public IPs. You need to set up an agent such as Filebeat (provided by Elastic) on each server that produces logs.
  • You need to set up a Filebeat instance on each machine.

It will watch the log files on each machine and forward them to the Logstash instance you specify in the filebeat.yml configuration file, like below:

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path_to_your_log_1/ELK/your_log1.log
    - /path_to_your_log_2/ELK/your_log2.log

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["logstash服务器的私有IP:5044"]
  • Logstash服务器监听5044端口,并通过Logstash配置文件流式传输所有日志:

      input {
        beats { port => 5044 }
      }
      filter {
        # your log filtering logic goes here
      }
      output {
        elasticsearch {
          hosts => ["elasticsearch_server_private_ip:9200"]
          index => "your_index_name"
        }
      }
    
  • In Logstash you can filter and split your logs into fields, for example with a grok filter as sketched below, and then send them to Elasticsearch.
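
For example, assuming the web servers write standard combined-format access logs (an assumption, since the original answer leaves the filter section empty), the filter could use the grok and date plugins roughly like this:

      filter {
        grok {
          # Split a combined Apache/Nginx access log line into named fields
          match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
        date {
          # Use the timestamp from the log line as the event's @timestamp
          match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
        }
      }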

  2. Resource requirements depend on how much data you produce, your data retention plan, transactions per second (TPS), and any custom requirements. If you can provide more details, I can give a rough estimate of the resources needed.
