
I am running the Elastic Stack in Docker using the official images; however, I currently receive the following error message when I try to use the Logstash aggregate plugin to combine events that share the same request ID (Logstash in Docker - combining 2 events into 1 event):

Cannot create pipeline {:reason=>"Couldn't find any filter plugin named 'aggregate'. Are you sure this is correct? Trying to load the aggregate filter plugin resulted in this error: Problems loading the requested plugin named aggregate of type filter. Error: NameError NameError"}
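(In case the plugin really is missing from the 5.6.3 image: I assume it could be installed into a custom image roughly as sketched below, using the logstash-plugin tool that ships with the official image; I have not verified that this step is actually needed, and the path is taken from the standard image layout.)

FROM docker.elastic.co/logstash/logstash:5.6.3

# sketch only: add the aggregate filter plugin to the image
RUN /usr/share/logstash/bin/logstash-plugin install logstash-filter-aggregate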

However, I am also not 100% sure how to use the Logstash aggregate plugin to combine events like the two below into a single event:

{ 
    "@t": "2017-10-16T20:21:35.0531946Z", 
    "@m": "HTTP GET Request: \"https://myapi.com/?format=json&trackid=385728443\"", 
    "@i": "29b30dc6", 
    "Url": "https://myapi.com/?format=json&trackid=385728443", 
    "SourceContext": "OpenAPIClient.Client", 
    "ActionId": "fd683cc6-9e59-427f-a9f4-7855663f3568", 
    "ActionName": "Web.Controllers.API.TrackController.TrackRadioLocationGetAsync (Web)", 
    "RequestId": "0HL8KO13F8US6:0000000E", 
    "RequestPath": "/api/track/radiourl/385728443" 
} 
{ 
    "@t": "2017-10-16T20:21:35.0882617Z", 
    "@m": "HTTP GET Response: LocationAPIResponse { Location: \"http://sample.com/file/385728443/\", Error: null, Success: True }", 
    "@i": "84f6b72b", 
    "Response": 
    { 
     "Location": "http://sample.com/file/385728443/", 
     "Error": null, 
     "Success": true, 
     "$type": "LocationAPIResponse" 
    }, 
    "SourceContext": "OpenAPIClient.Client", 
    "ActionId": "fd683cc6-9e59-427f-a9f4-7855663f3568", 
    "ActionName": "Web.Controllers.API.TrackController.TrackRadioLocationGetAsync (Web)", 
    "RequestId": "0HL8KO13F8US6:0000000E", 
    "RequestPath": "/api/track/radiourl/385728443" 
} 

Could someone please guide me on how to combine these events correctly? And if aggregate is the right plugin for this, why does this built-in plugin not seem to be part of the Logstash Docker image?

docker-compose.yml contents:

version: '3'
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
        container_name: elasticsearch
        environment:
            - discovery.type=single-node
            - xpack.security.enabled=false
        ports:
            - 9200:9200
        restart: always
    logstash:
        image: docker.elastic.co/logstash/logstash:5.6.3
        container_name: logstash
        environment:
            - xpack.monitoring.elasticsearch.url=http://elasticsearch:9200
        depends_on:
            - elasticsearch
        ports:
            - 10000:10000
        restart: always
        volumes:
            - ./logstash/pipeline/:/usr/share/logstash/pipeline/
    kibana:
        image: docker.elastic.co/kibana/kibana:5.6.3
        container_name: kibana
        environment:
            - xpack.monitoring.elasticsearch.url=http://elasticsearch:9200
        depends_on:
            - elasticsearch
        ports:
            - 5601:5601
        restart: always

logstash/pipeline/empstore.conf contents:

input { 
    http { 
     id => "empstore_http" 
     port => 10000 
     codec => "json" 
    } 
} 

output { 
    elasticsearch { 
     hosts => [ "elasticsearch:9200" ] 
     id => "empstore_elasticsearch" 
     index => "empstore-openapi" 
    } 
} 

filter { 
    mutate { 
     rename => { "RequestId" => "RequestID" } 
    } 

    aggregate { 
     task_id => "%{RequestID}" 
     code => "" 
    } 
} 

Answer


The code setting in your aggregate filter is required; it cannot be left empty.

Examples (a filled-in version of your filter is sketched after this list):

  • Request_END:

    code => "map['sql_duration'] += event.get('duration')"

  • Request_START:

    code => "map['sql_duration'] = 0"

  • Other task events:

    code => "map['sql_duration'] += event.get('duration')"
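Applied to the pipeline from the question, a filled-in filter could look roughly like the sketch below. This is only a sketch, not a verified configuration: it assumes the request event (the one carrying Url) reaches Logstash before the matching response event, and it simply copies the stashed URL onto the later event rather than emitting a single merged document.

filter {
    mutate {
     rename => { "RequestId" => "RequestID" }
    }

    aggregate {
     task_id => "%{RequestID}"
     code => "
      # first event of the pair: remember the request URL in the shared map
      map['Url'] ||= event.get('Url')
      # later event(s) of the pair: copy the remembered URL onto them
      event.set('Url', map['Url']) if event.get('Url').nil?
     "
     # discard the in-memory map if no further events arrive for this RequestID
     timeout => 120
    }
}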


So there is no way to simply combine the existing fields of the two events without the overhead of working out which one is the request and which one is the response? – wdspider


See here: https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html#plugins-filters-aggregate-example4 – lotfi1991


code => "map['country_name'] ||= event.get('country_name')  map['towns'] ||= []  map['towns'] << {'town_name' => event.get('town_name')}  event.cancel()" – lotfi1991
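Fitting that example-4 pattern to the events from the question might look roughly like the sketch below. This is an unverified sketch: the field names are taken from the question's events, the 30-second timeout is arbitrary, and push_map_as_event_on_timeout emits one merged event per RequestId while the original events are cancelled.

filter {
    aggregate {
     task_id => "%{RequestId}"
     code => "
      # collect fields from whichever event of the pair carries them
      map['Url']         ||= event.get('Url')
      map['Response']    ||= event.get('Response')
      map['RequestPath'] ||= event.get('RequestPath')
      # drop the original events; only the merged map is emitted later
      event.cancel()
     "
     push_map_as_event_on_timeout => true
     timeout => 30
     timeout_task_id_field => "RequestId"
     timeout_tags => ["aggregated"]
    }
}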