Collecting and analyzing nginx logs with logstash via rsyslog

Installation and configuration of logstash & elasticsearch & kibana

  In the article above, nginx was patched so that rsyslog could forward its logs to logstash for analysis. Production environments vary, however, so here the nginx logs are shipped to the logstash server directly through rsyslog, with no changes to nginx at all, which is simpler and more straightforward.

On the nginx server

The nginx configuration file needs no changes. Example:

[root@db2 ~]# grep -v ^.*# /usr/local/nginx/conf/nginx.conf|sed '/^$/d'
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       80;
        server_name  localhost;
        index index.html;                  # default config above; the following lines were changed
        root /var/www;
        access_log /var/log/nginx/access.log main;
        error_log /var/log/nginx/error.log;
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
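
If the log paths did change (as above, pointing access_log and error_log at /var/log/nginx/), a quick check along these lines confirms nginx still parses the config and is writing where rsyslog will look. This is only a sketch and assumes the default /usr/local/nginx install prefix used on this host:

[root@db2 ~]# /usr/local/nginx/sbin/nginx -t            # test the configuration syntax
[root@db2 ~]# /usr/local/nginx/sbin/nginx -s reload     # reload if anything was changed
[root@db2 ~]# curl -sI http://127.0.0.1/ > /dev/null    # generate one request
[root@db2 ~]# tail -n 1 /var/log/nginx/access.log       # confirm the access log picked it up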

rsyslog configuration

[root@db2 ~]# grep -v ^# /etc/rsyslog.conf|sed '/^$/d'
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imklog   # provides kernel logging support (previously done by rklogd)
$ModLoad imfile   # Load the imfile input module; it must be enabled
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$IncludeConfig /etc/rsyslog.d/*.conf
*.info;mail.none;authpriv.none;cron.none                /var/log/messages
authpriv.*                                              /var/log/secure
mail.*                                                  -/var/log/maillog
cron.*                                                  /var/log/cron
*.emerg                                                 *
uucp,news.crit                                          /var/log/spooler
local7.*                                                /var/log/boot.log
# nginx settings below
$InputFileName /var/log/nginx/error.log
$InputFileTag kibana-nginx-errorlog:
$InputFileStateFile state-kibana-nginx-errorlog
$InputRunFileMonitor
$InputFileName /var/log/nginx/access.log
$InputFileTag kibana-nginx-accesslog:
$InputFileStateFile state-kibana-nginx-accesslog
$InputRunFileMonitor
$InputFilePollInterval 10                 # poll and forward every 10 seconds
if $programname == 'kibana-nginx-errorlog' then @192.168.10.1:514
if $programname == 'kibana-nginx-errorlog' then ~
if $programname == 'kibana-nginx-accesslog' then @192.168.10.1:514
if $programname == 'kibana-nginx-accesslog' then ~
*.* @192.168.10.1:514

Configuration notes:

The NAME defined by $InputFileTag must be unique; different applications on the same host should use different NAMEs, otherwise the newly defined tag will not take effect (see the sketch after these notes for a second application);

The StateFile defined by $InputFileStateFile must also be unique; rsyslog uses it to record how far it has read into the file, and sharing one state file causes confusion;

@192.168.10.1:514 specifies the server that receives the logs (by IP, hostname, or domain name) and the port; a single @ forwards over UDP, while @@ would forward over TCP;

If needed, also add $InputFileSeverity info.
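
As an example of keeping tags and state files unique, a second application on the same host would get its own block. The Tomcat path and names below are purely illustrative:

$InputFileName /var/log/tomcat/catalina.out
$InputFileTag kibana-tomcat-log:
$InputFileStateFile state-kibana-tomcat-log
$InputFileSeverity info
$InputRunFileMonitor
if $programname == 'kibana-tomcat-log' then @192.168.10.1:514
if $programname == 'kibana-tomcat-log' then ~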

Then restart the rsyslog service:

[root@db2 ~]# service rsyslog restart
Shutting down system logger:                               [  OK  ]
Starting system logger:                                    [  OK  ]

The nginx logs are now being synced into /var/log/messages on the logstash server, as shown below.
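
If the entries do not appear, it is worth checking on the logstash server that the UDP datagrams are actually arriving and being written locally. A rough check (the hostname, prompt, and interface name here are only placeholders):

[root@logstash ~]# tcpdump -n -i eth0 udp port 514               # watch for incoming syslog packets
[root@logstash ~]# tail -f /var/log/messages | grep kibana-nginx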

logstash.conf configuration

input {
  file {
    type => "syslog"
#    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
    path => [ "/var/log/messages" ]
    sincedb_path => "/var/sincedb"
  }
  redis {
    host => "192.168.10.1"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
  syslog {
    type => "syslog"
    port => "5544"
  }
}
filter {
  grok {
    type => "syslog"
    match => [ "message", "%{SYSLOGBASE2}" ]
    add_tag => [ "syslog", "grokked" ]
  }
}
output {
  elasticsearch { host => "192.168.10.1" }
}
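
With the config saved, logstash can be checked and started against it. The exact command depends on the logstash version and how it was installed; on a 1.x package install it is roughly as follows (the /opt/logstash and /etc/logstash paths are assumptions):

[root@logstash ~]# /opt/logstash/bin/logstash agent -f /etc/logstash/logstash.conf --configtest
[root@logstash ~]# /opt/logstash/bin/logstash agent -f /etc/logstash/logstash.conf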

A sample nginx log line as it arrives over syslog:

Feb 26 14:41:47 db2 kibana-nginx-accesslog: 192.168.10.50 - - [26/Feb/2015:14:41:42 +0800] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.2; WOW64; Trident/7.0; rv:11.0) like Gecko LBBROWSER" "-"
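
For reference, the %{SYSLOGBASE2} pattern in the filter above only parses the syslog envelope of such a line; for the sample above it should yield roughly:

timestamp => "Feb 26 14:41:47"
logsource => "db2"
program   => "kibana-nginx-accesslog"

The nginx request itself stays in the message field; if it needs to be broken down further, a second grok against something like %{COMBINEDAPACHELOG} could be added, since the main log_format above is the combined format plus $http_x_forwarded_for.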

The logstash interface:
