I keep getting this error while trying to parse a CSV file. I'm wondering if I'm missing a library or something. The Logstash pipeline aborts.
I run this from the Windows command line with logstash.bat -f logstash.conf and get this output.
I'm using the rubydebug codec on the output.
21:19:03.781 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"C:/Users/Public/logstash-5.2.1/data/queue"}
21:19:03.787 [LogStash::Runner] INFO logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"0546332b-dc4d-4916-b5c6-7900d1fdd8a4", :path=>"C:/Users/Public/logstash-5.2.1/data/uuid"}
21:19:04.138 [[main]-pipeline-manager] ERROR logstash.agent - Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: translation missing: en.logstash.agent.configuration.invalid_plugin_register>, :backtrace=>["C:/Users/Public/logstash-5.2.1/vendor/bundle/jruby/1.9/gems/logstash-filter-mutate-3.1.3/lib/logstash/filters/mutate.rb:178:in `register'", "org/jruby/RubyHash.java:1342:in `each'", "C:/Users/Public/logstash-5.2.1/vendor/bundle/jruby/1.9/gems/logstash-filter-mutate-3.1.3/lib/logstash/filters/mutate.rb:172:in `register'", "C:/Users/Public/logstash-5.2.1/logstash-core/lib/logstash/pipeline.rb:235:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "C:/Users/Public/logstash-5.2.1/logstash-core/lib/logstash/pipeline.rb:235:in `start_workers'", "C:/Users/Public/logstash-5.2.1/logstash-core/lib/logstash/pipeline.rb:188:in `run'", "C:/Users/Public/logstash-5.2.1/logstash-core/lib/logstash/agent.rb:302:in `start_pipeline'"]}
A single line I'm trying to parse:
80,17-02-2017 18:28:31,56.000,45.000,0.000,2.000,0.000,44.000,55.000,57.000,50.000
A few lines from the log:
80,17-02-2017 18:28:31,56.000,45.000,0.000,2.000,0.000,44.000,55.000,57.000,50.000
80,17-02-2017 18:28:32,53.000,45.000,0.000,3.000,0.000,54.000,43.000,54.000,43.000
80,17-02-2017 18:28:33,56.000,45.000,0.000,2.000,0.000,45.000,51.000,43.000,50.000
80,17-02-2017 18:28:34,53.000,45.000,0.000,1.000,0.000,42.000,47.000,48.000,48.000
80,17-02-2017 18:28:35,59.000,45.000,0.000,2.000,0.000,45.000,59.000,39.000,48.000
80,17-02-2017 18:28:36,56.000,45.000,0.000,3.000,0.000,44.000,49.000,50.000,50.000
MY FILTER
filter {
  csv {
    columns => ["port", "timestamp", "tempcpuavg", "gputemp", "fanspeed", "gpuusage", "framerate", "tempcpu1", "tempcpu2", "tempcpu3", "tempcpu4"]
    #80, 17-02-2017 18:28:31,56.000, 45.000, 0.000, 2.000, 0.000, 44.000, 55.000, 57.000, 50.000
    separator => ","
    skip_empty_columns => "true"
    remove_field => ["message"]
  }
  mutate {
    convert => ["port", "integer"]
    convert => ["tempcpuavg", "double"]
    convert => ["gputemp", "double"]
    convert => ["fanspeed", "double"]
    convert => ["gpuusage", "double"]
    convert => ["framerate", "double"]
    convert => ["tempcpu1", "double"]
    convert => ["tempcpu2", "double"]
    convert => ["tempcpu3", "double"]
    convert => ["tempcpu4", "double"]
  }
  date {
    match => ["@timestamp", "MM-dd-YYYY HH:mm:ss"]
  }
}
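For what it's worth, here is a sketch of a filter that mutate should register, assuming the invalid_plugin_register error comes from the "double" conversion type: the mutate filter's convert option only accepts integer, float, string, and boolean, so every "double" would need to become "float". The date match shown here is an assumption too: the sample rows look like dd-MM-yyyy (17-02-2017), and the field parsed from the CSV is named timestamp, not @timestamp.

```conf
filter {
  csv {
    columns => ["port", "timestamp", "tempcpuavg", "gputemp", "fanspeed", "gpuusage", "framerate", "tempcpu1", "tempcpu2", "tempcpu3", "tempcpu4"]
    separator => ","
    skip_empty_columns => "true"
    remove_field => ["message"]
  }
  mutate {
    convert => ["port", "integer"]
    # "double" is not a valid convert target; use "float" for decimal values
    convert => ["tempcpuavg", "float"]
    convert => ["gputemp", "float"]
    convert => ["fanspeed", "float"]
    convert => ["gpuusage", "float"]
    convert => ["framerate", "float"]
    convert => ["tempcpu1", "float"]
    convert => ["tempcpu2", "float"]
    convert => ["tempcpu3", "float"]
    convert => ["tempcpu4", "float"]
  }
  date {
    # assumption: parse the CSV's "timestamp" column, day-first to match 17-02-2017
    match => ["timestamp", "dd-MM-yyyy HH:mm:ss"]
  }
}
```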
In what situation does the error occur while your filter parses the log file? Could you paste a few lines of the log and the filter you configured? – NutcaseDeveloper
Hey, I added a single line and then a few more. Also, for clarity, I added extra blank lines between the log lines. – ScipioAfricanus
I've added the filter as well. Thanks. – ScipioAfricanus