Zabbix low-level discovery (LLD) provides a way to create an array of related items, triggers, or graphs without needing to know the exact number of entities up front.
The easiest way to populate the keys of a discovery item is to add a “UserParameter” entry in zabbix_agentd.conf; the Zabbix agent then invokes a script that returns the set of keys.
But the keys are only the first part of a real solution, because what you really want to send back are the values associated with those keys. For example, if you are monitoring a database, you don’t want to send back just the list of available tables; you want to send each table name along with its row count and size on disk.
Unfortunately, Zabbix does not support sending back multiple values [1,2,3,4]. There are various workarounds, such as using one UserParameter for the discovery key and another UserParameter=key[*] to fetch each row of data, or using vfs.file.regexp to parse values that have been written to a file.
But I think the cleanest solution, and the one that spawns the fewest processes on the agent host, is to invoke zabbix_sender from inside the script to send back all the values you want to populate.
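As a rough sketch of that approach (the table names, the `table.rowcount[*]` item key, the agent hostname, and the Zabbix server address are all hypothetical placeholders), a single script can print the LLD JSON for the discovery rule and then pipe per-table values to zabbix_sender in one shot:

```python
#!/usr/bin/env python
import json
import subprocess

def discovery_json(tables):
    """Build the LLD JSON that the UserParameter prints to stdout."""
    return json.dumps({"data": [{"{#TABLENAME}": t} for t in tables]})

def send_values(host, server, rows):
    """Push (table, rowcount) pairs as trapper items via zabbix_sender.
    "-i -" tells zabbix_sender to read "host key value" lines from stdin."""
    lines = "".join(
        "%s table.rowcount[%s] %d\n" % (host, table, count)
        for table, count in rows
    )
    subprocess.run(["zabbix_sender", "-z", server, "-i", "-"],
                   input=lines.encode(), check=True)

tables = ["users", "orders"]
print(discovery_json(tables))  # consumed by the discovery rule
# After gathering real row counts, the same script would then call e.g.:
# send_values("dbhost1", "zabbix.example.com", [("users", 42), ("orders", 7)])
```

The discovered items on the Zabbix side must be of type “Zabbix trapper” so that the values pushed by zabbix_sender are accepted.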
Continue reading “Zabbix: LLD low-level discovery returning multiple values”
The open-source Zabbix monitoring solution has a published, simple binary protocol that allows you to send metrics to the Zabbix server without relying on the Zabbix Agent – which makes it very convenient for integration with other parts of your infrastructure.
In this article, I’ll show how to use the go-zabbix package for sending metrics to the Zabbix server. If instead you were looking to manipulate the backend server definitions (host, templates, hostgroups, etc.) using the REST API, then see my other article here.
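To show just how simple that wire protocol is, here is a hand-rolled framing sketch (in Python for illustration; the hostname and item key are placeholders). Any sender client, Go or otherwise, builds the same packet: a "ZBXD" header, a 0x01 flag byte, an 8-byte little-endian body length, and a JSON payload:

```python
import json
import struct

def zabbix_packet(host, key, value):
    """Frame one metric as a Zabbix sender-protocol packet:
    b"ZBXD" + 0x01 + little-endian uint64 body length + JSON body."""
    body = json.dumps({
        "request": "sender data",
        "data": [{"host": host, "key": key, "value": str(value)}],
    }).encode()
    return b"ZBXD\x01" + struct.pack("<Q", len(body)) + body

packet = zabbix_packet("webserver1", "app.requests", 1024)
# The packet would then be written over a plain TCP socket to the
# server's trapper port (10051), e.g. via socket.create_connection().
```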
Continue reading “Zabbix: Sending Zabbix metrics using a Go client”
The open-source Zabbix monitoring solution has a REST API that provides the ability for deep integrations with your existing monitoring, logging, and alerting systems.
This fosters development of community-driven modules like the py-zabbix Python module, which is an easy way to automate Zabbix as well as send/retrieve metrics.
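Under the hood, py-zabbix is wrapping Zabbix’s JSON-RPC 2.0 endpoint at api_jsonrpc.php. As a rough illustration of what the module does for you (the server URL and credentials are placeholders, and the "user" login parameter name follows the older Zabbix API of this era), a call can be made with nothing but the standard library:

```python
import json
import urllib.request

API_URL = "http://zabbix.example.com/api_jsonrpc.php"  # placeholder

def rpc_request(method, params, auth=None, req_id=1):
    """Build one JSON-RPC 2.0 request body as the Zabbix API expects."""
    body = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    if auth is not None:
        body["auth"] = auth  # session token from a prior user.login call
    return body

def call(method, params, auth=None):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(rpc_request(method, params, auth)).encode(),
        headers={"Content-Type": "application/json-rpc"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# e.g. token = call("user.login", {"user": "Admin", "password": "zabbix"})
#      hosts = call("host.get", {"output": ["hostid", "name"]}, auth=token)
```

py-zabbix hides all of this boilerplate behind attribute access, so the same host.get becomes a one-liner.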
Continue reading “Zabbix: Accessing Zabbix using the py-zabbix Python module”
ElastAlert from the Yelp Engineering group provides a very flexible platform for alerting on conditions coming from ElasticSearch.
In a previous article I fully described running it interactively on an Ubuntu server, and now I’ll expand on that by running it at system startup using a System-V init script.
One of the challenges of getting ElastAlert to run as a service is that it has a very strict set of module requirements that easily conflict with other Python applications, so we will use Python’s virtualenv to build it in isolation and then call that wrapper from the service script.
Continue reading “ELK: Running ElastAlert as a service on Ubuntu 14.04”
ElasticSearch’s Metricbeat is a lightweight shipper of both system and application metrics that runs as an agent on a client host. That means that along with standard cpu/mem/disk/network metrics, you can also monitor Apache, Docker, Nginx, Redis, etc. as well as create your own collector in the Go language.
In this article we will describe installing Metricbeat 5.x on Ubuntu when the back end ElasticSearch version is either 5.x or 2.x.
Continue reading “ELK: Installing MetricBeat for collecting system and application metrics”
ElasticSearch’s commercial X-Pack has alerting functionality based on ElasticSearch conditions, but there is also a strong open-source contender from Yelp’s Engineering group called ElastAlert.
ElastAlert offers developers the ultimate control, with the ability to easily create new rules, alerts, and filters using all the power and libraries of Python.
Continue reading “ELK: ElastAlert for alerting based on data from ElasticSearch”
Building services using Spring Boot gives a development team a jump start on many production concerns, including logging. But unlike a standard deployment, where the developer’s responsibility typically ends with logging to a local file, with Docker we must think about how to log to a persistent location outside our ephemeral container.
The Docker logging drivers capture all the output from a container’s stdout/stderr, and can send a container’s logs directly to most major logging solutions (syslog, Logstash, gelf, fluentd).
As an added benefit, by making the logging implementation a runtime choice for the container, it provides flexibility to use a simpler implementation during development but a highly-available, scalable logging solution in production.
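As one concrete example of that runtime choice (the syslog server address is a placeholder), the driver can be selected per container with `docker run --log-driver=syslog --log-opt syslog-address=udp://192.168.1.10:514 myimage`, or set as the daemon-wide default in /etc/docker/daemon.json:

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://192.168.1.10:514",
    "tag": "{{.Name}}"
  }
}
```

The `tag` option stamps each entry with the container name, which makes the syslog stream much easier to filter downstream.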
Continue reading “Docker: Sending Spring Boot logging to syslog”