--- /dev/null
+Example configuration for logstash/elasticsearch
+================================================
+
+So you've got all these RADIUS logs, but how do you analyse them? What is the
+easiest way to query the logs, find out when a client connected or
+disconnected, or view the top ten clients logging into the system over the last
+six hours?
+
+The logstash/elasticsearch/kibana stack is designed and built to do just that.
+elasticsearch is a search engine; logstash is commonly used to feed data into
+it, and kibana is the web interface used to query the logs in near real time.
+
+Installing the ELK stack is beyond the scope of this document, but can be done
+in a short amount of time by any competent sysadmin. Then comes getting the
+logs in.
+
+This directory contains the following files as a starting point for feeding
+RADIUS logs into elasticsearch via logstash.
+
+Files
+-----
+
+Please note that all files should be reviewed before use to determine if they
+are suitable for your configuration/system.
+
+radius-mapping.sh
+
+ Each elasticsearch index needs a mapping to describe how fields are stored.
+ If one is not provided, all is not lost: elasticsearch will build one on
+ the fly. However, the generated mapping may not be optimal, especially for
+ RADIUS data, as all fields will be analyzed, making some visualisations
+ (such as showing the top N clients) hard or impossible.
+
+ This shell script (which just runs curl) pushes a template mapping into the
+ elasticsearch cluster.
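+
+ As an illustration, a template mapping can be pushed with a single curl
+ call. The sketch below is an assumption of what such a script might
+ contain, not the exact contents of radius-mapping.sh; it marks dynamic
+ string fields in radius-* indexes as not_analyzed so that values such as
+ MAC addresses are kept whole for top-N visualisations.
+
+ ```shell
+ #!/bin/sh
+ # Sketch only: push an index template for radius-* indexes.
+ # Assumes elasticsearch is listening on localhost:9200.
+ curl -XPUT 'http://localhost:9200/_template/radius' -d '
+ {
+   "template": "radius-*",
+   "mappings": {
+     "detail": {
+       "dynamic_templates": [ {
+         "strings_not_analyzed": {
+           "match_mapping_type": "string",
+           "mapping": { "type": "string", "index": "not_analyzed" }
+         }
+       } ]
+     }
+   }
+ }'
+ ```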
+
+
+radius.conf
+
+ A sample configuration file for logstash that parses RADIUS 'detail' files.
+ It joins the lines of each record into a single event, then splits the
+ tab-indented key-value pairs out into separate fields.
+
+ The file will need to be edited, at least to set the input method. For
+ experimentation the provided stdin input may be used. If logstash is
+ running on the RADIUS server then the 'file' input may be appropriate;
+ otherwise a shipper such as log-courier or logstash-forwarder may be a
+ better way to get the data over the network to the logstash server.
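+
+ For a quick test with the stdin input enabled, an existing detail file can
+ be piped straight in. The paths below are illustrative assumptions; adjust
+ them for your installation.
+
+ ```shell
+ # Run the pipeline once over a saved detail file; logstash reads the
+ # records on stdin and indexes the resulting documents.
+ /opt/logstash/bin/logstash -f radius.conf \
+     < /var/log/radius/radacct/10.9.0.4/detail-20150310
+ ```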
+
+
+See also
+--------
+
+elasticsearch web site: http://www.elastic.co/
+
+
+
+Matthew Newton
+April 2015
+
--- /dev/null
+# logstash configuration to process RADIUS detail files
+#
+# Matthew Newton
+# February 2014
+#
+# RADIUS "detail" files are textual representations of the RADIUS
+# packets, and are written to disk by e.g. FreeRADIUS. They look
+# something like the following, with the timestamp on the first
+# line then all attributes/values tab-indented.
+#
+# Tue Mar 10 15:32:24 2015
+# Packet-Type = Access-Request
+# User-Name = "test@example.com"
+# Calling-Station-Id = "01-02-03-04-05-06"
+# Called-Station-Id = "aa-bb-cc-dd-ee-ff:myssid"
+# NAS-Port = 10
+# NAS-IP-Address = 10.9.0.4
+# NAS-Identifier = "Wireless-Controller-1"
+# Service-Type = Framed-User
+# NAS-Port-Type = Wireless-802.11
+#
+# This filter processes the detail file such that each attribute
+# is stored as a separate field in the output document.
+
+
+#input {
+#  stdin {
+#    type => "radiusdetail"
+#  }
+#}
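+
+# Alternative input if logstash runs on the RADIUS server itself:
+# read the detail files directly. The path below is an illustrative
+# assumption; point it at wherever your detail files are written.
+#
+#input {
+#  file {
+#    path => "/var/log/radius/radacct/*/detail-*"
+#    type => "radiusdetail"
+#  }
+#}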
+
+
+filter {
+
+  if [type] == "radiusdetail" {
+
+    # join all lines of a record together
+    multiline {
+      pattern => "^[^\t]"
+      negate => true
+      what => "previous"
+    }
+
+    # pull off the timestamp
+    grok {
+      match => [ "message", "^(?<timestamp>[^\t]+)\t" ]
+    }
+
+    # create the timestamp field; the second pattern allows for
+    # ctime-style space-padded single-digit days
+    date {
+      match => [ "timestamp", "EEE MMM dd HH:mm:ss yyyy",
+                              "EEE MMM  d HH:mm:ss yyyy" ]
+    }
+
+    # split the attributes and values into fields
+    kv {
+      field_split => "\n"
+      source => "message"
+      trim => "\" "
+      trimkey => "\t "
+    }
+  }
+}
+
+output {
+  if [type] == "radiusdetail" {
+    elasticsearch {
+      host => "localhost"
+      protocol => "http"
+      cluster => "elasticsearch"
+      index_type => "detail"
+      index => "radius-%{+YYYY.MM.dd}"
+      flush_size => 1000
+    }
+  }
+}
+