Tags index with Filebeat and Logstash
I was using logstash-forwarder with Logstash and a dynamic, tag-based index configuration:
/etc/logstash/conf.d/10-output.conf
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "logstash-%{tags}-%{+YYYY.MM.dd}"
  }
}
/etc/logstash-forwarder.conf
"files": [ { "paths": [ "/var/log/httpd/ssl_access_log", "/var/log/httpd/ssl_error_log" ], "fields": { "type": "apache", "tags": "mytag" } },
I converted the configuration files to Filebeat in this way:
/etc/filebeat/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/httpd/access_log
      input_type: log
      document_type: apache
      fields:
        tags: mytag
Now, in the Kibana index, instead of mytag I see beats_input_codec_plain_applied.
I can see the two problems mentioned in this topic. Let me summarize them for my own benefit and for other visitors struggling with the same problem.
- The format for adding tag(s) in the Filebeat prospector configuration (per-prospector tags are available since 5.0, or 1.2.3 as a-j noticed):
bad:

  fields:
    tags: mytag

good:

  fields:
    tags: ["mytag"]
However, there's a more important issue:
- Tags getting concatenated. We want tags to be an array, but if we ship the newly added tags to Logstash, we'll see them being concatenated into strings in ES.
If adding only one tag, a workaround (as per hellb0y77) is to remove the automatic tag that Filebeat adds, in Logstash (on the central server side):
filter { if "beats_input_codec_plain_applied" in [tags] { mutate { remove_tag => ["beats_input_codec_plain_applied"] } } }
This would not work if one wanted to add multiple tags in Filebeat. One would have to make Logstash split the concatenated string and add each item to the tags. It is perhaps better, in that case, to put the tags on the Filebeat end into a custom field, not the "tags" field, and extract them from that custom field in Logstash, as sketched below.
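
A minimal sketch of that custom-field approach, assuming a hypothetical field name my_tags declared on the Filebeat side (fields: my_tags: ["tag1", "tag2"]) and the Logstash 5.x event API:

filter {
  ruby {
    code => "
      # Move our custom tag list (hypothetical field my_tags) into the real
      # tags array, then drop the helper field.
      extra = event.get('[fields][my_tags]')
      if extra
        event.set('tags', (event.get('tags') || []) + Array(extra))
        event.remove('[fields][my_tags]')
      end
    "
  }
}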
Anyway, there seems to be no way to make this work by changing the Filebeat configuration alone; the only way is to do the parsing in the receiving Logstash filter chain. See https://github.com/elastic/filebeat/issues/220
If you can remove Logstash, that solution may work for you: when sending logs from Filebeat directly to Elasticsearch, the tags appear in ES as expected.
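
A minimal sketch of that direct setup in filebeat.yml, assuming Filebeat 5.x and a local Elasticsearch (the host is a placeholder):

output.elasticsearch:
  hosts: ["localhost:9200"]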