Filebeat to Graylog: Working with Linux Audit Daemon Log File
If you run the audit daemon on your Linux distribution, you might notice that some of the most valuable information produced by auditd is not transmitted when you enable syslog forwarding to Graylog. By default, these messages are written to /var/log/audit/audit.log directly by the auditd process and are not sent via syslog. In this post, we will walk through the steps to capture this information and bring it into your Graylog instance, to gain insight into what users do on your Linux servers. This is similar to our earlier blog post, “Back to Basics: Enhance Windows Security with Sysmon and Graylog”, but now for Linux.
Select a Log Shipper/Collector
To collect these important messages, you need to make an extra effort: fetch the file with a log collector and transmit it to Graylog. Changing the default settings so that auditd also sends these messages via syslog is one option. However, the information in these messages may be incomplete if they exceed the traditional syslog size limit (1024 bytes). Because these messages are security relevant and may include sensitive information, they should not be transferred in plain text over any kind of network; the collector should use a secured transport connection. The easiest way to get this up and running is to use Elastic's Filebeat and create a Beats input on the Graylog server. This Filebeat instance can be controlled by the Graylog Collector-Sidecar or any kind of configuration management you already use. Of course, it is also possible to configure Filebeat manually if you are only collecting from a single host.
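To see why the size limit matters, here is a small Python sketch (the EXECVE record is fabricated for illustration) showing how a long auditd line would lose its tail when squeezed into a traditional 1024-byte syslog payload:

```python
# Classic BSD syslog (RFC 3164) caps the payload at 1024 bytes; a relay
# that enforces the limit simply cuts the message off at that point.
MAX_SYSLOG_BYTES = 1024

# fabricated EXECVE record with a long command line, as auditd often logs
line = ('type=EXECVE msg=audit(1364481363.243:24287): argc=2 '
        'a0="tar" a1="' + "A" * 1200 + '"')

truncated = line.encode("utf-8")[:MAX_SYSLOG_BYTES].decode("utf-8")

print(len(line))                 # the record is longer than the limit
print(truncated.endswith('"'))   # the closing quote of a1 did not survive
```

Shipping the file directly with a collector avoids this truncation entirely.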
Deliver the Log File
By default, the auditd log file is owned by the user and group root and not accessible to any other user. The collector needs to run as root or needs to be added to the group “root” to have access to that log file.
Running the collector as root is by far the simplest solution, but please check that this does not violate any policies in your environment.
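As a quick sanity check before starting the shipper, a small Python helper (hypothetical, not part of Filebeat) can confirm the log file is actually readable when run as the collector's service user:

```python
import os

def audit_log_access(path="/var/log/audit/audit.log"):
    """Report whether the current user could ship this file.

    Run this as the same user the collector runs as (e.g. via sudo -u).
    """
    if not os.path.exists(path):
        return "missing"
    return "readable" if os.access(path, os.R_OK) else "permission denied"
```

If this reports "permission denied", either run the collector as root or grant its user access via the group, as discussed above.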
Like any other log file that should be transported with Filebeat, the best solution is to use one prospector with the configuration specific to that file. More details can be found in the Filebeat documentation. The host name and port below are placeholders; adjust them to your environment.

filebeat.prospectors:
- encoding: plain
  paths:
    - /var/log/audit/audit.log
  fields:
    type: auditd

output.logstash:
  hosts: ["graylog.example.org:5044"]
  # to enhance security of this sensitive data, enable client certificates
  # and certificate verification
  # ssl.certificate_authorities: ["/etc/ca.crt"]
  # ssl.certificate: "/etc/client.crt"
  # ssl.key: "/etc/client.key"
  # ssl.verification_mode: full

Filebeat 6.x
In version 6, Filebeat introduced the concept of modules. Modules are designed for an Elastic Stack environment and provide pre-built parsers for Logstash and dashboards for Kibana. However, since Graylog does the parsing, analysis, and visualization in place of Logstash and Kibana, neither of those components applies.
Modules also create a dedicated index in Elasticsearch, but Graylog manages all of its Elasticsearch indices itself, so for most Graylog users these modules are of little benefit.
The configuration file settings stay the same with Filebeat 6 as they were for Filebeat 5.
If you already run the Collector-Sidecar in your environment, use it to configure Filebeat: just add a new configuration and tag that includes the audit log file. Remember to add the field type: auditd to the configuration so that the rules below will work.
Create Beats Input
Create a Beats input in Graylog. If you communicate only within your trusted network, this does not need to be secured but, depending on the nature of the messages, you might want to protect them with TLS.
Use your own certificate authority to create certificates for the Graylog input that Filebeat can verify when it connects.
In addition, you could require client certificates so that Graylog accepts messages only from clients that authenticate with a certificate.
In that case, Graylog and the collector each need their own certificate, plus the certificate authority's certificate to verify the peer.
Process the Messages
How you choose to process auditd log messages depends entirely on your needs, but we recommend you start by extracting all information into separate fields and normalizing them. The following processing pipeline rules can be the start of your processing and enrichment.
The rule bodies below are a sketch (the grok pattern and function arguments are examples you may need to adapt); the comments explain the intent of each step.

rule "auditd_identify_and_tag"
when
    // we use only one rule to identify if this is an auditd log message;
    // all following rules can just check for has_field("auditd_log").
    // put any identifier you have for the auditd log file in this rule
    to_string($message.facility) == "filebeat" AND
    // the following condition only works if the auditd log file is
    // in the default location:
    // has_field("file") AND
    // to_string($message.file) == "/var/log/audit/audit.log" AND
    // you need to adjust this if you change the field in the collector configuration!
    to_string($message.type) == "auditd"
then
    set_field("auditd_log", true);
end

// extract all key-value pairs from "message" and prefix them with auditd_
rule "auditd_kv_ex_prefix"
when
    has_field("auditd_log")
then
    set_fields(
        fields: key_value(value: to_string($message.message), trim_value_chars: "\""),
        prefix: "auditd_"
    );
end

rule "auditd_extract_time_sequence"
when
    // when auditd_msg is present we try to extract the epoch
    // and the sequence number with grok
    has_field("auditd_msg")
then
    set_fields(grok(pattern: "audit\\(%{NUMBER:auditd_log_epoch}:%{NUMBER:auditd_log_sequence}\\)", value: to_string($message.auditd_msg), only_named_captures: true));
    // if the epoch was extracted successfully, create a human-readable timestamp;
    // be aware that the milliseconds will be cut off (a bug in the library used)
    // the time zone can be adjusted to your desired zone, default is UTC
    set_field("auditd_log_time", format_date(parse_unix_milliseconds(to_long(to_double($message.auditd_log_epoch)) * 1000), "yyyy-MM-dd HH:mm:ss", "UTC"));
    // remove the unwanted field
    remove_field("auditd_msg");
end
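To make the effect of these processing steps concrete, here is a small Python sketch (field names follow the rules above; the sample line is fabricated) that performs the same key-value extraction, prefixing, and epoch/sequence parsing:

```python
import re
from datetime import datetime, timezone

def process_auditd_line(line):
    """Mimic the kv-extraction and time/sequence rules on one audit.log line."""
    # key=value extraction, stripping surrounding quotes, prefixed with auditd_
    fields = {"auditd_" + key: value.strip('"')
              for key, value in re.findall(r'(\w+)=("[^"]*"|\S+)', line)}
    # pull epoch and sequence number out of the msg=audit(EPOCH:SEQ): token
    match = re.search(r'audit\((\d+)\.\d+:(\d+)\)', fields.get("auditd_msg", ""))
    if match:
        fields["auditd_log_epoch"] = match.group(1)
        fields["auditd_log_sequence"] = match.group(2)
        # human-readable timestamp in UTC; milliseconds are dropped here too
        ts = datetime.fromtimestamp(int(match.group(1)), tz=timezone.utc)
        fields["auditd_log_time"] = ts.strftime("%Y-%m-%d %H:%M:%S")
        del fields["auditd_msg"]  # remove the unwanted field
    return fields

sample = 'type=USER_LOGIN msg=audit(1364481363.243:24287): pid=1028 uid=0 res="success"'
print(process_auditd_line(sample))
```

Running this on the sample line yields prefixed fields such as auditd_type and auditd_log_sequence, which is exactly the shape the searches and QuickValues below rely on.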
Next, create a new processing pipeline with three stages. In the first stage, place the rule auditd_identify_and_tag, in the second stage auditd_kv_ex_prefix, and in the third auditd_extract_time_sequence. After this pipeline is connected to a stream of messages (System > Pipelines > Manage Pipelines > Edit > Edit Connections), it will start working. It should look similar to the following picture.
Continue the Work
The above enables you to view all entries that belong to the same sequence (search for auditd_log_sequence: $NUMBER) and gives you the ability to get some nice overviews (QuickValues on auditd_type) of what is going on in your system. You can also develop your very own view of what happens on your systems.
For instance, one potentially useful piece of information to monitor is which ciphers are used when connecting to the system (QuickValues on auditd_cipher, or search _exists_:auditd_cipher).
As a final note, this blog post is not meant to be a definitive guide to monitoring auditd log files, but it should enable you to get started. Upcoming changes in Graylog v3.0 will also simplify threat hunting and analysis of auditd log files; we will look deeper into that once it is released!