Cloud Foundry is an opinionated Platform-as-a-Service that allows you to manage applications at scale. This article is part of a series that explores different facets of a Cloud Foundry deployment using the spring-music project as an example.
This article is Part 4 of a series on Cloud Foundry concepts. In it, we will look at the Cloud Foundry log types, how to configure Logback for spring-music, and how to inject those events into a log pipeline.
Continue reading “CloudFoundry: Logging for the spring-music webapp, Part 4”
Building services with Spring Boot gives a development team a jump start on many production concerns, including logging. But in a standard deployment, the developer’s responsibility typically ends with logging to a local file; with Docker, we must think about how to ship logs to a durable location outside our ephemeral container space.
The Docker logging drivers capture all the output from a container’s stdout/stderr and can send a container’s logs directly to most major logging destinations (syslog, gelf, fluentd, journald), from which they can be forwarded to aggregators such as Logstash.
As an added benefit, making the logging implementation a runtime choice for the container provides the flexibility to use a simple implementation during development and a highly available, scalable logging solution in production.
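As a sketch of that runtime choice, the same image can be run with different logging drivers; the image name and syslog endpoint below are assumptions for illustration.

```shell
# Development: default json-file driver, logs stay local and are
# viewable with `docker logs music-dev`.
docker run -d --name music-dev my-springboot-app

# Production: same image, but stdout/stderr is shipped to a remote
# syslog endpoint -- no change to the application or image required.
docker run -d --name music-prod \
  --log-driver=syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  --log-opt tag="spring-music" \
  my-springboot-app
```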
Continue reading “Docker: Sending Spring Boot logging to syslog”
The Spring framework provides a proven and well-documented model for the development of custom projects and services. The Spring Boot project takes an opinionated view of building production Spring applications, favoring convention over configuration.
In this article we will explore how to configure a Spring Boot project to use the Simple Logging Facade for Java (SLF4J) with a Logback backend to send log events to the console, filesystem, and syslog.
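A minimal sketch of such a configuration is shown below; the syslog host, facility, and file path are assumptions for illustration.

```xml
<!-- src/main/resources/logback.xml -->
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>/var/log/myapp/app.log</file>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger - %msg%n</pattern>
    </encoder>
  </appender>

  <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>logs.example.com</syslogHost>
    <port>514</port>
    <facility>LOCAL1</facility>
    <suffixPattern>myapp: %-5level %logger %msg</suffixPattern>
  </appender>

  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
    <appender-ref ref="FILE"/>
    <appender-ref ref="SYSLOG"/>
  </root>
</configuration>
```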
Continue reading “Spring: Spring Boot with SLF4J/Logback sending to syslog”
The most variable part of an ELK (Elasticsearch-Logstash-Kibana) stack is the mechanism by which custom events and logs get sent to Logstash for processing.
Companies running Java applications that log via Log4j or SLF4J/Logback will have local log files that need to be tailed. Applications running in containers may send everything to stdout/stderr, or use drivers to forward it to syslog and other destinations. Network appliances tend to offer SNMP or remote syslog outputs.
But regardless of the details, events must flow from their source to the Logstash indexing layer. Doing this with maximized availability and scalability, and without putting excessive pressure on the Logstash indexing layer is the primary concern of this article.
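One common pattern is a lightweight shipper tailing local files and load-balancing across the Logstash indexing layer. The sketch below uses Filebeat; the paths and hosts are assumptions for illustration.

```yaml
# filebeat.yml -- minimal sketch
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log

output.logstash:
  # Listing multiple hosts with loadbalance spreads events across the
  # Logstash indexing layer instead of pinning them to a single node.
  hosts: ["logstash1.example.com:5044", "logstash2.example.com:5044"]
  loadbalance: true
```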
Continue reading “ELK: Feeding the logging pipeline”
SLF4J, the Simple Logging Facade for Java, is a popular facade for various logging backends, one of them being Logback. With the advent of containerization, syslog has become a popular transport for sending log data to remote logging infrastructure.
Enable Syslog Input
The first step is to enable the receipt of syslog messages. The receiver can be any server listening for syslog traffic: you can follow my previous article on configuring an Ubuntu server to receive RFC5424-compatible messages, or you can configure a syslog input in Logstash.
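The Logstash option amounts to a short input block; the port below is an assumption (a non-privileged port avoids running Logstash as root).

```
input {
  syslog {
    port => 5514
    type => "syslog"
  }
}
```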
Continue reading “Syslog: Sending Java SLF4J/Logback to Syslog”
Content Packs are plugins that allow you to create pre-packaged knowledge about specific event types.
For example, you can create a content pack that knows how to extract fields from one of your custom log sources. Beyond extracted fields, you can also add saved queries, aggregations, alerts, dashboards, and visualizations.
Incoming Events from Agent
First, let’s examine our sample log file on the agent side, in a file named /tmp/test.log.
2016-07-14 22:04:13.233 INFO com.my.myTest - [ 150] 200
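The kind of field extraction a content pack defines can be sketched as a regex over this sample line. The field names "duration" and "status" for the two trailing numbers are assumptions for illustration.

```python
import re

# Named groups mirror the fields a content pack would extract.
# "duration" and "status" are hypothetical names for the two numbers.
LOG_PATTERN = re.compile(
    r"^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) "
    r"(?P<level>\w+) "
    r"(?P<logger>\S+) - "
    r"\[\s*(?P<duration>\d+)\] "
    r"(?P<status>\d+)$"
)

line = "2016-07-14 22:04:13.233 INFO com.my.myTest - [ 150] 200"
match = LOG_PATTERN.match(line)
fields = match.groupdict() if match else {}
print(fields)
```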
Continue reading “vRealize Log Insight: Creating your own content pack for field extraction”