Spring Boot Actuator and ELK - JSON Message Log
Application Server
One of the things I love about Spring is
the simplicity of things. Creating an application server that exposes health and
metrics REST endpoints is very simple. For the application server I use Spring
Boot (http://projects.spring.io/spring-boot/).
For adding the health and metrics endpoints I use Spring Boot Actuator (https://github.com/spring-projects/spring-boot/tree/master/spring-boot-actuator).
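Assuming a Maven build that inherits from the spring-boot-starter-parent (so no version tag is needed), Actuator is pulled in with a single dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>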
Now that I have my application up and
running, I want to monitor it with a nice GUI that keeps history. To do
this I use ELK: Elasticsearch, Logstash and Kibana. Kibana is the GUI,
Elasticsearch is the database, and Logstash enters the data.
To implement this I could have added a mechanism
that sends my health and metrics directly to Elasticsearch, but that would be a
solution-specific implementation. I also do not want my application to have to check
Elasticsearch's status or handle its errors. This is where Logstash comes in. All
my application needs to implement is the writing of the log files. Spring Boot
comes with slf4j built in (http://docs.spring.io/spring-boot/docs/current/reference/html/howto-logging.html).
Now all I need is to send my health and metrics to separate log files for
Logstash to use.
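As a sketch of what this can look like (the class, the RestTemplate usage and the metric logger name are my own illustrative choices, not the author's code; the health logger name is taken from the log sample later in this post), a scheduled task can poll the Actuator endpoints and write each response to its own logger:

import org.slf4j.LoggerFactory
import org.springframework.scheduling.annotation.Scheduled
import org.springframework.stereotype.Component
import org.springframework.web.client.RestTemplate

@Component
class MonitorLogger {
  // Dedicated loggers; each is routed to its own file appender in logback.xml
  private val healthLogger = LoggerFactory.getLogger("com.clearforest.importer.log.health")
  private val metricLogger = LoggerFactory.getLogger("com.clearforest.importer.log.metric")
  private val rest = new RestTemplate()

  // Poll the Actuator endpoints once a minute and dump the raw JSON responses.
  // Requires @EnableScheduling on a configuration class.
  @Scheduled(fixedRate = 60000L)
  def logStatus(): Unit = {
    healthLogger.info(rest.getForObject("http://localhost:8080/health", classOf[String]))
    metricLogger.info(rest.getForObject("http://localhost:8080/metrics", classOf[String]))
  }
}

Plain info(...) calls are enough to get the text into the files; the Message Issue section below explains why they are not enough for Kibana, and what to pass instead.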
Log Files
The first step in creating the log files is to
use Logback as our logging implementation. For that you need to add a logback.xml
file to your resources folder (http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#boot-features-logback-extensions).
Here you define the rolling of the log files and their format.
To get the log files into Elasticsearch
you have two options. The first is to configure Logstash to parse
the default Spring Boot log files using grok (https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html); a rough sketch of such a filter follows.
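Something along these lines, assuming the default single-line Spring Boot console format (the pattern and field names are my own approximation, not a complete parse of the Spring Boot layout):

filter {
  grok {
    # Roughly matches lines like "2015-11-09 10:11:19.026  INFO ... : some message"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:level}\s+%{GREEDYDATA:logmessage}" }
  }
}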
Since we know that we will be using
Logstash, however, it is better to have the log files formatted as JSON from the beginning.
To do this we use the Logstash encoder for Logback.
For example:
<appender name="LogStashHealthFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp/>
            <version/>
            <threadName/>
            <loggerName/>
            <logLevel/>
            <logLevelValue/>
            <context/>
            <arguments/>
            <message/>
        </providers>
    </encoder>
    <File>../log/log-stash-health.log</File>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- rollover daily -->
        <fileNamePattern>../log/archive/log-stash-health.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
        <maxHistory>14</maxHistory>
        <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <!-- or whenever the file size reaches 5MB -->
            <maxFileSize>5MB</maxFileSize>
        </timeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
</appender>
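One thing the appender snippet leaves implicit is the wiring between a logger and the appender. A sketch of that wiring, using the health logger name that appears in the log sample later in this post (additivity is turned off so the health entries do not also flow to the root log):

<logger name="com.clearforest.importer.log.health" level="INFO" additivity="false">
    <appender-ref ref="LogStashHealthFile"/>
</logger>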
To use the encoder you
need to add the following Maven dependency:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.5.1</version>
</dependency>
This gives us log files in JSON format, ready for Logstash.
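On the Logstash side the JSON files can then be consumed without any grok at all. A minimal pipeline sketch (the path and host are placeholders for your environment; on Logstash versions before 2.0 the elasticsearch output takes host instead of hosts):

input {
  file {
    path => "/path/to/log/log-stash-health.log"
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}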
The Message Issue
The issue that I had a
hard time with, and which is the main reason for this blog post, is the JSON format of the
message. The Logback encoder turns each log entry into a JSON object, where the
message is one field of that object. This field is recorded as a plain text field
and not as nested JSON.
For example:
{
  "@timestamp": "2015-11-09T10:11:19.026+02:00",
  "@version": 1,
  "logger_name": "com.clearforest.importer.log.health",
  "level": "INFO",
  "message": "{\"status\":\"UP\",\"queue\":{\"status\":\"UP\",\"Queue Name\":\"cmwell.files.ucpa\",\"Data Directory\":\"..\\\\data\\\\ActiveMQ\"},\"CMWell\":{\"status\":\"UP\",\"host\":\"vgilad:9000\"},\"nrg\":{\"status\":\"UP\",\"lastDataReceived\":\"09.11.2015 10:11:18\",\"lastWriteDate\":\"unknown\",\"cachedMessages\":1},\"diskSpace\":{\"status\":\"UP\",\"free\":24298827776,\"threshold\":10485760}}"
}
For Kibana to work we
need the message of the log entry to be part of the full JSON object. To fix this,
in the code that writes the log we add the following import:
import net.logstash.logback.argument.StructuredArguments._
This adds the option to pass structured arguments to
the logging interface. The arguments are then written into the log entry as a JSON object and not
as a string field. In the code you then use the logger as follows:
val res = RequestFactory.getRestContent(s"http://localhost:$serverPort/metrics")
metricLogger.info("", raw("metric", res))
and in the log file we get a full JSON object (the sample below is from the matching health call):
{
  "@timestamp": "2015-11-09T08:59:57.480+02:00",
  "@version": 1,
  "level": "INFO",
  "level_value": 20000,
  "health": {
    "status": "DOWN",
    "queue": {
      "status": "UP",
      "Queue Name": "files.abc",
      "Data Directory": "..\\data\\ActiveMQ"
    },
    "MyApp": {
      "status": "DOWN",
      "error": "Connection refused: connect",
      "host": "abc:9000"
    },
    "diskSpace": {
      "status": "UP",
      "free": 24000569344,
      "threshold": 10485760
    }
  }
}
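As a closing note, raw is the right helper here because res is already a JSON string. The same StructuredArguments import also provides keyValue (alias kv) and value (alias v) for logging individual fields; a small illustrative sketch (the logger name is a placeholder):

import net.logstash.logback.argument.StructuredArguments._
import org.slf4j.LoggerFactory

val logger = LoggerFactory.getLogger("com.clearforest.importer.log.metric")
// kv adds "queueSize": 42 to the JSON entry and renders as queueSize=42
// in the formatted message; v would render just the value
logger.info("current state {}", kv("queueSize", 42))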