
Filebeat to Graylog: Working with Linux Audit Daemon Log File

If you run the audit daemon on your Linux distribution, you might notice that some of the most valuable information produced by auditd is not transmitted when you enable syslog forwarding to Graylog. By default, these messages are written to /var/log/audit/audit.log, which the auditd process writes directly and does not send via syslog. In this post, we will walk through the steps to capture this information and bring it into your Graylog instance so you can gain insight into what users do on your Linux servers. This is similar to our earlier blog post, “Back to Basics: Enhance Windows Security with Sysmon and Graylog”, but for Linux.

Select a Log Shipper/Collector

In order to collect these important messages, you need to make an extra effort to read the file with a log collector and then transmit it to Graylog. Changing the auditd defaults so that these messages are sent over syslog is one option. However, the information in these messages might be truncated if they exceed the size limit of traditional syslog (1,024 bytes). Because these messages may include sensitive, security-relevant information, they should not be transferred in plain text over any kind of network; the collector should use a secured transport connection. The easiest way to get this up and running is to use Elastic’s Filebeat and create a Beats input on the Graylog server. This Filebeat instance can be controlled by the Graylog Collector-Sidecar or any kind of configuration management you already use. Of course, it is also possible to configure Filebeat manually if you are only collecting from a single host.

Deliver the Log File

By default, the auditd log file is owned by the user and group root and not accessible to any other user. The collector needs to run as root or needs to be added to the group “root” to have access to that log file.

Please check that you do not violate any policies in your environment; running the collector as root is by far the simplest solution.
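If running the collector as root is not acceptable, one possible approach is to let auditd assign a dedicated group to its log file and add the collector’s user to that group. The following is only a sketch: log_group is a standard auditd.conf option, but the adm group, the filebeat user name, and the resulting file permissions are assumptions that depend on your distribution and auditd version.

# /etc/audit/auditd.conf (excerpt)
log_group = adm

# add the collector's user to that group, then restart auditd
usermod -a -G adm filebeat
service auditd restart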

Filebeat 5.x

As with any other log file that should be shipped with Filebeat, the best solution is to use one prospector that contains the configuration specific to that file. More details can be found in the Filebeat documentation.

filebeat:
  prospectors:
    - encoding: plain
      fields:
        collector_node_id: c00010.lan
        type: auditd
      ignore_older: 0
      paths:
        - /var/log/audit/audit.log
      scan_frequency: 10s
      tail_files: true
      type: log
output:
  logstash:
    hosts:
      - graylog001.lan:5044
      - graylog002.lan:5044
      - graylog003.lan:5044
    loadbalance: true
    #
    # to enhance security of this sensitive data, enable client certificates
    # and certificate verification
    # ssl.certificate_authorities: ["/etc/ca.crt"]
    # ssl.certificate: "/etc/client.crt"
    # ssl.key: "/etc/client.key"
    # ssl.verification_mode: full

Filebeat 6.x

In version 6, Filebeat introduced the concept of modules. Modules are designed to work in an Elastic Stack environment and provide pre-built parsers for Logstash and dashboards for Kibana. However, since Graylog does the parsing, analysis, and visualization in place of Logstash and Kibana, neither of those components applies here.

The modules also create a dedicated index in Elasticsearch, but Graylog already manages all indices in Elasticsearch, so for most Graylog users these modules are of little benefit.

The configuration file settings stay the same in Filebeat 6 as they were in Filebeat 5.
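For reference, here is a minimal sketch of the same setup in the dotted notation commonly used in Filebeat 6 examples; newer 6.x releases also accept filebeat.inputs in place of filebeat.prospectors. The host names and the collector_node_id are the same placeholders used above.

filebeat.prospectors:
  - type: log
    paths:
      - /var/log/audit/audit.log
    fields:
      collector_node_id: c00010.lan
      type: auditd
    tail_files: true

output.logstash:
  hosts: ["graylog001.lan:5044", "graylog002.lan:5044", "graylog003.lan:5044"]
  loadbalance: true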

Graylog Collector-Sidecar

Use the Collector-Sidecar to configure Filebeat if you already run it in your environment. Just add a new configuration, with its own tag, that includes the audit log file, as sketched below. Remember to add the field type: auditd to that configuration so that the rules below will work.
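The end result should be that the Sidecar renders a file input equivalent to the prospector shown above into its generated filebeat.yml. A minimal sketch of that fragment (the path and collector_node_id are placeholders):

filebeat:
  prospectors:
    - type: log
      paths:
        - /var/log/audit/audit.log
      fields:
        type: auditd            # the pipeline rules below key off this field
        collector_node_id: c00010.lan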

Create Beats Input

Create a Beats input in Graylog. If you communicate only within your trusted network, this does not need to be secured, but depending on the nature of the message content you might want to protect it with TLS.

Use your own certificate authority to create a certificate for the Graylog input that Filebeat can verify when it connects.

In addition, you could issue client certificates so that Graylog accepts messages only from clients that authenticate with a certificate.

Both Graylog and the collector then need their own certificate as well as the certificate authority’s certificate to verify each other’s certificates. The relevant input settings are sketched below.
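A sketch of the TLS-related settings on the Beats input, assuming certificates created as described above; the exact field labels depend on your Graylog version, and all file paths are placeholders:

bind_address:               0.0.0.0
port:                       5044
tls_enable:                 true
tls_cert_file:              /etc/graylog/tls/input.crt
tls_key_file:               /etc/graylog/tls/input.key
tls_client_auth:            required
tls_client_auth_cert_file:  /etc/graylog/tls/ca.crt

On the Filebeat side, enable the commented ssl.* options from the configuration above and point them at the client certificate, the client key, and the same CA certificate.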

Process the Messages

How you choose to process auditd log messages depends entirely on your needs, but we recommend you start by extracting all information into separate fields and normalizing it. The following rules belong in a processing pipeline and can serve as the starting point for your processing and enrichment.

Rules

rule "auditd_identify_and_tag"

// we use only one rule to identify whether this is an auditd log file,
// so all following rules only need to check this single field.
//
// following rules can just check for:
//   has_field("is_auditd")

when

  // put any identifier you have for the auditd log file
  // in this rule

  has_field("facility") AND
  to_string($message.facility) == "filebeat" AND

  //
  // the following check only works if the auditd log file is
  // in the default location
  //
  // has_field("file") AND
  // to_string($message.file) == "/var/log/audit/audit.log" AND

  // you need to adjust this if you change the field in the collector configuration!
  has_field("type") AND
  to_string($message.type) == "auditd"

then

  set_field("is_auditd", true);

end

rule "auditd_kv_ex_prefix"

when

  has_field("is_auditd")

then

  // extract all key-value pairs from "message" and prefix them with auditd_
  set_fields(
    fields:
      key_value(
        value: to_string($message.message),
        trim_value_chars: "\""
      ),
    prefix: "auditd_"
  );

end
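From here you can add further normalization rules. As an example sketch (it assumes the key_value extraction above produced an auditd_type field from the type= token of each record; adjust the field names to what you actually see), the following rule copies the audit record type into a lowercase event_action field:

rule "auditd_normalize_event_action"

when

  has_field("is_auditd") AND
  has_field("auditd_type")

then

  // e.g. type=USER_LOGIN becomes event_action: user_login
  set_field("event_action", lowercase(to_string($message.auditd_type)));

end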
