ELK Notes

The open-source real-time log analysis platform ELK (made up of ElasticSearch, Logstash, and Kibana) makes it easy to collect logs, manage them centrally, and run statistics and searches over them. Below is an integration test based on ELK 5.1, the latest release at the time of writing.


Source packages:

           wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.zip

           wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.tar.gz
           wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-linux-x86_64.tar.gz

 

ElasticSearch
1. Overview:

Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is a fully open-source tool that collects and analyzes your logs and stores them for later use.

Kibana is an open-source, free tool that provides a friendly web interface for analyzing the logs from Logstash and ElasticSearch, helping you aggregate, analyze, and search important log data.

Working with Filebeat

Basic architecture diagram

Here Elasticsearch serves as the log storage and indexing platform; Logstash, with its large ecosystem of plugins, acts as the log processing platform; Kibana fetches data from Elasticsearch for visualization and custom reports; Filebeat sits on each host to collect logs from specified locations and ship them to Logstash; and Log4j connects to Logstash directly, sending logs straight to it (of course, Filebeat could also collect the tomcat logs here instead).
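For reference, a minimal Filebeat 5.x prospector that ships a host's tomcat logs to a Logstash beats input could look like the sketch below (the log path and port 5044 are illustrative assumptions; the walkthrough that follows uses the log4j input instead):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/tomcat/*.log        # files Filebeat should watch on this host
output.logstash:
  hosts: ["192.168.111.131:5044"]  # the Logstash beats input to ship to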

2. Unpack ElasticSearch and enter the directory:

unzip elasticsearch-5.1.1.zip 
cd elasticsearch-5.1.1

3. Start the ElasticSearch service:

./bin/elasticsearch

Because the service is being started as the root account here, it fails with the following error:

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:96) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.cli.Command.main(Command.java:62) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.1.1.jar:5.1.1]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:100) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:176) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:306) ~[elasticsearch-5.1.1.jar:5.1.1]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-5.1.1.jar:5.1.1]
        ... 6 more

The error is clear: elasticsearch must not run as root. One workaround circulating online is to add the es.insecure.allow.root=true parameter and start like this:

./bin/elasticsearch -Des.insecure.allow.root=true

The startup log is shown below:

starts elasticsearch
 
Option                Description                                              
------                -----------                                              
-E <KeyValuePair>     Configure a setting                                      
-V, --version         Prints elasticsearch version information and exits       
-d, --daemonize       Starts Elasticsearch in the background                   
-h, --help            show help                                                
-p, --pidfile <Path>  Creates a pid file in the specified path on start        
-q, --quiet           Turns off standard ouput/error streams logging in console
-s, --silent          show minimal output                                      
-v, --verbose         show verbose output                                      
ERROR: D is not a recognized option

It did not start successfully. The same trick does work on elasticsearch-2.1.1, though, and the project's issue tracker explains why:

In 5.0.0-alpha3, all of this has been cleaned up. 
The entanglement between system properties and settings has been removed. 
This means that the system property es.insecure.allow.root will not automatically 
be converted to a setting which means it's no longer a problem that it's not registered.

A more detailed discussion: https://github.com/elastic/elasticsearch/issues/18688

This restriction exists for system security: since ElasticSearch can accept and execute user-supplied scripts, it is recommended to create a dedicated user to run it.
4. Create a user group and a user:

groupadd esgroup
useradd esuser -g esgroup -p espassword
# note: useradd -p expects an already-encrypted hash, so the password set above
# will not be the literal "espassword"; run `passwd esuser` to set it properly

Change the owner and group of the elasticsearch directory and everything in it:

cd /opt/elk
chown -R esuser:esgroup elasticsearch-5.1.1

Switch to the new user and run it:

su esuser
./bin/elasticsearch

The startup log is shown below:

[esuser@localhost elasticsearch-5.1.1]$ ./bin/elasticsearch
[2017-01-13T11:18:35,020][INFO ][o.e.n.Node               ] [] initializing ...
[2017-01-13T11:18:35,284][INFO ][o.e.e.NodeEnvironment    ] [rBrMTNx] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [21.6gb], net total_space [27.4gb], spins? [unknown], types [rootfs]
[2017-01-13T11:18:35,284][INFO ][o.e.e.NodeEnvironment    ] [rBrMTNx] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-01-13T11:18:35,285][INFO ][o.e.n.Node               ] node name [rBrMTNx] derived from node ID [rBrMTNxCSEehQsfjvxmSgw]; set [node.name] to override
[2017-01-13T11:18:35,308][INFO ][o.e.n.Node               ] version[5.1.1], pid[2612], build[5395e21/2016-12-06T12:36:15.409Z], OS[Linux/3.10.0-327.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_112/25.112-b15]
[2017-01-13T11:18:38,284][INFO ][o.e.p.PluginsService     ] [rBrMTNx] loaded module [aggs-matrix-stats]
[2017-01-13T11:18:38,285][INFO ][o.e.p.PluginsService     ] [rBrMTNx] loaded module [ingest-common]
[2017-01-13T11:18:38,285][INFO ][o.e.p.PluginsService     ] [rBrMTNx] loaded module [lang-expression]
[2017-01-13T11:18:38,285][INFO ][o.e.p.PluginsService     ] [rBrMTNx] loaded module [lang-groovy]
[2017-01-13T11:18:38,285][INFO ][o.e.p.PluginsService     ] [rBrMTNx] loaded module [lang-mustache]
[2017-01-13T11:18:38,285][INFO ][o.e.p.PluginsService     ] [rBrMTNx] loaded module [lang-painless]
[2017-01-13T11:18:38,285][INFO ][o.e.p.PluginsService     ] [rBrMTNx] loaded module [percolator]
[2017-01-13T11:18:38,285][INFO ][o.e.p.PluginsService     ] [rBrMTNx] loaded module [reindex]
[2017-01-13T11:18:38,285][INFO ][o.e.p.PluginsService     ] [rBrMTNx] loaded module [transport-netty3]
[2017-01-13T11:18:38,285][INFO ][o.e.p.PluginsService     ] [rBrMTNx] loaded module [transport-netty4]
[2017-01-13T11:18:38,286][INFO ][o.e.p.PluginsService     ] [rBrMTNx] no plugins loaded
[2017-01-13T11:18:44,427][INFO ][o.e.n.Node               ] initialized
[2017-01-13T11:18:44,441][INFO ][o.e.n.Node               ] [rBrMTNx] starting ...
[2017-01-13T11:18:44,943][INFO ][o.e.t.TransportService   ] [rBrMTNx] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-01-13T11:18:44,976][WARN ][o.e.b.BootstrapCheck     ] [rBrMTNx] max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2017-01-13T11:18:44,976][WARN ][o.e.b.BootstrapCheck     ] [rBrMTNx] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2017-01-13T11:18:48,633][INFO ][o.e.c.s.ClusterService   ] [rBrMTNx] new_master {rBrMTNx}{rBrMTNxCSEehQsfjvxmSgw}{N9FXgAA5TjC78HBimEJ9kw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-01-13T11:18:48,762][INFO ][o.e.g.GatewayService     ] [rBrMTNx] recovered [0] indices into cluster_state
[2017-01-13T11:18:48,775][INFO ][o.e.h.HttpServer         ] [rBrMTNx] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2017-01-13T11:18:48,775][INFO ][o.e.n.Node               ] [rBrMTNx] started

The log shows two listening ports: 9300, used for transport between nodes, and 9200, which accepts HTTP requests. Ctrl+C stops the process.
5. Run in the background:

./bin/elasticsearch -d

6. A simple connection test:

curl 127.0.0.1:9200

The result is shown below:

[root@localhost ~]# curl 127.0.0.1:9200
{
  "name" : "rBrMTNx",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "-noR5DxFRsyvAFvAzxl07g",
  "version" : {
    "number" : "5.1.1",
    "build_hash" : "5395e21",
    "build_date" : "2016-12-06T12:36:15.409Z",
    "build_snapshot" : false,
    "lucene_version" : "6.3.0"
  },
  "tagline" : "You Know, for Search"
}
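As an extra sanity check, the _cat APIs summarize cluster state; for the single node above this should report a green or yellow cluster:

curl 127.0.0.1:9200/_cat/health?v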

7. Since elasticsearch is installed inside a virtual machine and I want my host machine to reach it as well, config/elasticsearch.yml needs the following setting:

network.host: 192.168.111.131

192.168.111.131 is the VM's address. After this change, restarting the service produces the following error log:

ERROR: bootstrap checks failed
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Solution:
Switch to the root user and edit limits.conf:

vi /etc/security/limits.conf 

Add the following lines:

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

Then edit sysctl.conf:

vi /etc/sysctl.conf 

Add the following line:

vm.max_map_count=655360
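limits.conf only takes effect at the user's next login; the sysctl change can be applied immediately without rebooting:

sysctl -p                 # reload /etc/sysctl.conf
sysctl vm.max_map_count   # confirm the new value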

A more detailed reference: Elasticsearch 5.0 installation issue roundup.
After the changes, restart the service and open it from a browser on the host; the result looks like the screenshot below:

8. Stop the service:

ps -ef |grep elasticsearch
kill PID
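Or, as a one-liner (assuming pgrep from procps is available):

kill $(pgrep -f elasticsearch)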

 

Logstash
Logstash is a fully open-source tool that collects and filters your logs and stores them for later use; see the overview diagram from the official site:

1. Unpack and enter the directory

tar -zxvf logstash-5.1.1.tar.gz
cd logstash-5.1.1

2. Add a configuration file

vi config/first-pipeline.conf

Add the following content:

input {
  log4j {
    host => "192.168.111.131"
    port => 8801
  }
}
output {
    elasticsearch {
        hosts => [ "192.168.111.131:9200" ]
    }
}
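Before wiring the pipeline to Elasticsearch, Logstash itself can be smoke-tested with an inline stdin/stdout pipeline; every line typed at the console comes back as a structured event:

./bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'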

3. Start the service

./bin/logstash -f config/first-pipeline.conf

The log of a successful start looks like this:

Sending Logstash's logs to /opt/elk/logstash-5.1.1/logs which is now configured via log4j2.properties
[2017-01-17T11:41:42,978][INFO ][logstash.inputs.log4j    ] Starting Log4j input listener {:address=>"192.168.111.131:8801"}
[2017-01-17T11:41:43,565][INFO ][logstash.inputs.log4j    ] Log4j input
[2017-01-17T11:41:45,658][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://192.168.111.131:9200"]}}
[2017-01-17T11:41:45,659][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x3a685472 URL:http://192.168.111.131:9200>, :healthcheck_path=>"/"}
log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAuthCache).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[2017-01-17T11:41:46,090][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x3a685472 URL:http://192.168.111.131:9200>}
[2017-01-17T11:41:46,115][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-01-17T11:41:46,357][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-01-17T11:41:46,474][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["192.168.111.131:9200"]}
[2017-01-17T11:41:46,495][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-01-17T11:41:46,514][INFO ][logstash.pipeline        ] Pipeline main started
[2017-01-17T11:41:46,935][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

The line New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["192.168.111.131:9200"]} indicates that Logstash has successfully connected to the specified Elasticsearch.

Kibana
Kibana provides a friendly web interface for analyzing the logs from Logstash and ElasticSearch, helping you aggregate, analyze, and search important log data.
1. Unpack and enter the directory

tar -zxvf kibana-5.1.1-linux-x86_64.tar.gz
cd kibana-5.1.1-linux-x86_64

2. Edit the configuration file

vi config/kibana.yml

Add the following settings:

server.port: 5601
server.host: "192.168.111.130"
elasticsearch.url: "http://192.168.111.131:9200"
kibana.index: ".kibana"

3. Start the service

./bin/kibana

The log of a successful start looks like this:

log   [16:42:01.349] [info][status][plugin:kibana@5.1.1] Status changed from uninitialized to green - Ready
log   [16:42:01.406] [info][status][plugin:elasticsearch@5.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log   [16:42:01.427] [info][status][plugin:console@5.1.1] Status changed from uninitialized to green - Ready
log   [16:42:01.567] [info][status][plugin:timelion@5.1.1] Status changed from uninitialized to green - Ready
log   [16:42:01.574] [info][listening] Server running at http://192.168.111.130:5601
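Besides the browser, Kibana 5.x exposes a status endpoint that makes for a quick scriptable health check:

curl http://192.168.111.130:5601/api/status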

4. Open it in a browser

http://192.168.111.130:5601/

On first use Kibana asks you to Configure an index pattern; the default index name is logstash-*, so just click Create.

Testing
1. Prepare a test project that writes data to Logstash, then inspect the data through Kibana.
log4j Maven dependency:

<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>

log4j.properties

log4j.rootLogger=DEBUG, logstash
 
log4j.appender.logstash=org.apache.log4j.net.SocketAppender
log4j.appender.logstash.Port=8801
log4j.appender.logstash.RemoteHost=192.168.111.131
log4j.appender.logstash.ReconnectionDelay=60000
log4j.appender.logstash.LocationInfo=true

The test class App:

import org.apache.log4j.Logger;
 
public class App {
    private static final Logger LOGGER = Logger.getLogger(App.class);
 
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 1000; i++) {
            LOGGER.info("Test [" + i + "]");
        }
    }
}

Running App produces the following error:

SocketException: Software caused connection abort: socket write error

Workaround:

vi logstash-core/lib/logstash-core_jars.rb

Comment out the line require_jar( 'org.apache.logging.log4j', 'log4j-1.2-api', '2.6.2' ) by prefixing it with #, then restart logstash.
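If you prefer a one-liner, the same edit can be made with GNU sed, which prefixes the matching line with a comment marker:

sed -i "/log4j-1.2-api/ s/^/# /" logstash-core/lib/logstash-core_jars.rb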

2. View the data in Kibana
Click Discover and set an appropriate date range; the logs just sent via log4j appear, as in the screenshot below:

You can search by keyword, narrow the time range, set the refresh interval, and so on; more features are left for you to explore.
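The same events can also be verified from the command line by querying the daily logstash-* indices directly (the address is the Elasticsearch host configured earlier):

curl '192.168.111.131:9200/logstash-*/_search?q=message:Test&size=1&pretty'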

The components of this stack are:

  • Elasticsearch, to store and search our logs.
  • Logstash, to ingest logs from our containers and write them to Elasticsearch.
  • Kibana, as a web UI to search the logs we store in Elasticsearch.
  • Consul, acting as a service catalog to support discovery
  • ContainerPilot, to help with service discovery and bootstrap orchestration
  • Triton, Joyent's container-native infrastructure platform.
  • Nginx, acting as a source of logs for testing.

Diagram of Triton-ELK architecture

The ELK stack components all have configuration files that expect hard-coded IP addresses for their dependencies; Kibana and Logstash need an IP for Elasticsearch and Elasticsearch nodes need the IP of one other node to bootstrap clustering. Each container has a startup script configured as a ContainerPilot preStart handler to update the config file prior to starting the main application.
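As a rough illustration only (ContainerPilot 2.x-style fields; the script path, service name, and health check below are assumptions, not the repo's actual config), a preStart handler is declared like this:

{
  "consul": "consul:8500",
  "preStart": "/usr/local/bin/manage.sh",
  "services": [
    {
      "name": "elasticsearch",
      "port": 9200,
      "health": "curl --fail -s http://localhost:9200/",
      "poll": 10,
      "ttl": 25
    }
  ]
}

ContainerPilot runs the preStart command to completion before launching the main process, which is what lets each container rewrite its config file with fresh IPs first.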

Additionally, the ELK application expects certain indexes to be created in Elasticsearch. When the Kibana application starts, the preStart handler script (manage.sh) will write these indexes to Elasticsearch and will send a log entry to Logstash so that the Logstash application can create its initial schema.

Getting started

  1. Get a Joyent account and add your SSH key.
  2. Install the Docker Toolbox (including docker and docker-compose) on your laptop or other environment, as well as the Joyent Triton CLI (triton replaces our old sdc-* CLI tools)
  3. Configure Docker and Docker Compose for use with Joyent:
curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh && chmod +x sdc-docker-setup.sh
./sdc-docker-setup.sh -k us-east-1.api.joyent.com <ACCOUNT> ~/.ssh/<PRIVATE_KEY_FILE>

Check that everything is configured correctly by running ./test.sh check. If it returns without an error you're all set. This script will create an _env file that includes the Triton CNS name for the Consul service.

Start the stack

You can run the entire application with just Docker Compose:

yum install git docker
git clone https://github.com/autopilotpattern/elk
$ docker-compose -p elk up -d
Creating elk_consul_1
Creating elk_elasticsearch_master_1
Creating elk_elasticsearch_1
Creating elk_elasticsearch_data_1
Creating elk_logstash_1
Creating elk_kibana_1

$ docker-compose -p elk ps
Name                         Command                                      State  Ports
--------------------------------------------------------------------------------
elk_consul_1                 /bin/start -server -bootst ...               Up     53/tcp, 53/udp, 8300/tcp, 8301/tcp, 8301/udp, 8302/tcp, 8302/udp, 8400/tcp, 0.0.0.0:8500->8500/tcp
elk_elasticsearch_1          /bin/containerpilot /usr/share/elastic...    Up     9200/tcp, 9300/tcp
elk_elasticsearch_data_1     /bin/containerpilot /usr/share/elastic...    Up     9200/tcp, 9300/tcp
elk_elasticsearch_master_1   /bin/containerpilot /usr/share/elastic...    Up     9200/tcp, 9300/tcp
elk_kibana_1                 /bin/containerpilot /usr/share/kibana...     Up     0.0.0.0:5601->5601/tcp
elk_logstash_1               /bin/containerpilot /usr/share/logstas...    Up     0.0.0.0:12201->12201/tcp, 0.0.0.0:12201->12201/udp, 24224/tcp, 0.0.0.0:514->514/tcp, 0.0.0.0:514->514/udp

Within a few moments all components of the application will be registered in the Consul discovery service and will have found the other components they need. We can add new nodes to Elasticsearch just by running docker-compose -p elk scale <node type>=<number of nodes>.
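For example, to grow the data tier to three elasticsearch containers:

docker-compose -p elk scale elasticsearch=3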

Run the test rig

The test script included with this repo can open the Consul and Kibana web consoles on systems that support the open command (e.g., OS X):

./test.sh show
Waiting for Consul...
Opening Consul console... Refresh the page to watch services register.
Waiting for Kibana to register as healthy...
Opening Kibana console.

This repo also includes a Docker Compose file for starting Nginx containers that are configured to use either the syslog or gelf log driver.

$ ./test.sh -p elk test syslog
Starting Nginx log source...
Pulling nginx_syslog (autopilotpattern/nginx-elk-demo:latest)...
latest: Pulling from autopilotpattern/nginx-elk-demo
...
Creating elk_nginx_syslog_1
Waiting for Nginx to register as healthy...

Opening web page.

HTTP requests that we send to Nginx will be logged and become visible in Kibana.

Screenshot of Kibana

OpenWAF overview

OpenWAF is the first fully open-source web application protection system (WAF); it analyzes HTTP request information based on the nginx_lua API. OpenWAF is built from two major engines: a behavior analysis engine and a rule engine. The rule engine analyzes individual requests, while the behavior analysis engine tracks information across requests.

The rule engine is inspired by ModSecurity and lua-resty-waf, reimplementing ModSecurity's rule mechanism in Lua. Built on the rule engine, OpenWAF protects against protocol violations, automated tools, injection attacks, cross-site attacks, information leakage, abnormal requests, and more, and supports adding rules dynamically to patch vulnerabilities promptly.

The behavior analysis engine includes anti-probing modules (frequency-based fuzzy identification, anti-crawler, human/machine detection), anti-attack modules (anti-CSRF, anti-CC, anti-privilege-escalation, file upload protection), and anti-leakage modules (cookie tamper-proofing, anti-hotlinking, custom response headers, attack response pages).

Besides the two engines, there are basic modules for statistics, logging, attack response pages, access rules, and so on. OpenWAF also supports changing its configuration dynamically and adding third-party modules at runtime, so protection can be upgraded without restarting the engine or interrupting service.

OpenWAF can package the features above as policies, and different web applications can be protected with different policies. A cloud platform is planned on which policies can be shared for others to reference.

ELK overview

ELK is shorthand for three different tools which, combined, cover a wide range of log analysis tasks.

Elasticsearch: an open-source search engine based on Apache Lucene(TM); put simply, the tool that indexes and stores the logs.

Logstash: an application that supports transporting, filtering, and managing logs; we generally use it to collect application logs centrally.

Kibana: a web platform for presenting log analysis in a friendlier way; put simply, a picture is worth a thousand words, and Kibana generates all kinds of charts to visualize the analysis results.

Installation

There are plenty of ELK installation guides online; only the docker deployment is described here.

Elasticsearch

  1. Pull the elasticsearch docker image
    docker pull elasticsearch  
  2. Start the elasticsearch container
    docker run -d --name openwaf_es elasticsearch  
  3. Get the address of openwaf_es
    docker inspect openwaf_es | grep IPAddress  
    The address turns out to be: 192.168.39.17
    
    PS: the elasticsearch service listens on port 9200

Logstash

  1. Pull the logstash docker image
    docker pull logstash
  2. Start the logstash container
    docker run -it --name openwaf_logstash -v /root/logstash.conf:/usr/share/logstash/config/logstash.conf logstash -f /usr/share/logstash/config/logstash.conf
PS:
    The /root/logstash.conf file contains the following (note the udp block must sit inside an input section; a netcat check of this input follows the list below):
    input {
        udp {                  # udp input
            port => 60099      # the log listener binds to port 60099
            codec => "json"    # accept messages in JSON format
        }
    }
    output {
        elasticsearch {
            hosts => ["192.168.39.17:9200"] # elasticsearch lives at 39.17, port 9200
        }
    }
    This configuration means: openwaf sends JSON logs over UDP to port 60099 of logstash, and logstash stores them in Elasticsearch.
  3. Get the address of openwaf_logstash
    docker inspect openwaf_logstash | grep IPAddress  
    The address turns out to be: 192.168.39.18
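To confirm the UDP input is listening before pointing OpenWAF at it, a hand-rolled JSON event (the payload is just an illustrative test string) can be pushed in with netcat:

echo '{"msg":"test event"}' | nc -u -w1 192.168.39.18 60099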

Kibana

  1. Pull the kibana docker image
    docker pull kibana
  2. Start the kibana container
    docker run -d --name openwaf_kibana -e ELASTICSEARCH_URL=http://192.168.39.17:9200 kibana
  3. Get the address of openwaf_kibana
    docker inspect openwaf_kibana | grep IPAddress  
    The address turns out to be: 192.168.39.19
    
    PS: the kibana service listens on port 5601

OpenWAF configuration

The twaf_log module in conf/twaf_default_conf.json:

    "twaf_log": {
        "sock_type":"udp",
        "content_type":"JSON",
        "host":"192.168.39.18",
        "port":60099,
        ...
    }

Testing

The test OpenWAF instance is at 192.168.36.44, reverse-proxying the backend server 192.168.39.216.

Now request 192.168.36.44/?a=1 order by 1 (a simple SQL-injection probe).
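From a shell, the same probe (with the query string URL-encoded by curl) is:

curl -G --data-urlencode "a=1 order by 1" http://192.168.36.44/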

The result of the request:

Interception page

Now open 192.168.39.19:5601 to inspect the log in kibana.

If this is the first time kibana is used, an index pattern must be created, as below (use the defaults):

Creating the kibana index pattern

The log entry appears in kibana as follows:

kibana log display

 
