The open-source Zabbix monitoring solution has a published, simple binary protocol that allows you to send metrics to the Zabbix server without relying on the Zabbix Agent, which makes it very convenient for integration with other parts of your infrastructure.
In this article, I’ll show how to use the go-zabbix package to send metrics to the Zabbix server. If you instead want to manipulate the back-end server definitions (hosts, templates, host groups, etc.) using the REST API, see my other article here.
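To give a feel for how simple the wire format is, here is a rough sketch of the sender ("trapper") protocol, written in stdlib Python rather than Go so it is self-contained. A packet is the ASCII header `ZBXD`, a protocol flag byte `0x01`, an 8-byte little-endian payload length, and a JSON body; the function and server names below are illustrative.

```python
import json
import socket
import struct

def build_sender_packet(host, key, value):
    """Build a Zabbix sender packet: ZBXD + 0x01 header,
    8-byte little-endian payload length, then the JSON body."""
    payload = json.dumps({
        "request": "sender data",
        "data": [{"host": host, "key": key, "value": str(value)}],
    }).encode("utf-8")
    return b"ZBXD\x01" + struct.pack("<Q", len(payload)) + payload

def send_metric(server, port, host, key, value):
    """Send one metric and return the server's parsed JSON reply."""
    packet = build_sender_packet(host, key, value)
    with socket.create_connection((server, port), timeout=5) as s:
        s.sendall(packet)
        reply = s.recv(4096)
    # The reply uses the same framing: skip the 13-byte header.
    return json.loads(reply[13:].decode("utf-8"))
```

Usage would look like `send_metric("zabbix.example.com", 10051, "myhost", "my.key", 42)`, where host and port are placeholders for your own trapper endpoint.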
Continue reading “Zabbix: Sending Zabbix metrics using a Go client”
The open-source Zabbix monitoring solution has a REST API that enables deep integration with your existing monitoring, logging, and alerting systems.
This fosters community-driven modules like the py-zabbix Python module, which makes it easy to automate Zabbix as well as send and retrieve metrics.
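What a module like py-zabbix does under the hood can be sketched with only the standard library: the API is JSON-RPC 2.0 over HTTP. In the sketch below the URL and credentials are placeholders, and the `user`/`password` parameter names match the Zabbix versions of this era.

```python
import json
import urllib.request

ZABBIX_URL = "http://zabbix.example.com/api_jsonrpc.php"  # placeholder

def rpc_payload(method, params, auth=None, req_id=1):
    """Build a Zabbix API JSON-RPC 2.0 request body."""
    body = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    if auth is not None:
        body["auth"] = auth  # session token from user.login
    return body

def call(method, params, auth=None):
    """POST one JSON-RPC request and return its 'result' field."""
    data = json.dumps(rpc_payload(method, params, auth)).encode("utf-8")
    req = urllib.request.Request(
        ZABBIX_URL, data=data,
        headers={"Content-Type": "application/json-rpc"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["result"]

# token = call("user.login", {"user": "Admin", "password": "zabbix"})
# hosts = call("host.get", {"output": ["hostid", "host"]}, auth=token)
```

Every call after `user.login` carries the returned session token in the `auth` field; py-zabbix manages that token for you.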
Continue reading “Zabbix: Accessing Zabbix using the py-zabbix Python module”
Elastic’s Metricbeat is a lightweight shipper of both system and application metrics that runs as an agent on a client host. Along with standard CPU/memory/disk/network metrics, you can also monitor Apache, Docker, Nginx, Redis, etc., as well as write your own collector in Go.
In this article, we will describe installing Metricbeat 5.x on Ubuntu when the back-end ElasticSearch version is either 5.x or 2.x.
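As a preview of the configuration, the module and output sections of `metricbeat.yml` look roughly like this under the 5.x layout; the host URL is a placeholder for your own cluster:

```yaml
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "network", "filesystem"]
    period: 10s

output.elasticsearch:
  hosts: ["http://localhost:9200"]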
Continue reading “ELK: Installing MetricBeat for collecting system and application metrics”
The ElasticSearch stack (ELK) is a popular open-source solution that serves as both a repository and a search interface for a wide range of applications, including log aggregation and analysis, analytics storage, search, and document processing.
Its standard web front end, Kibana, is a great product for data exploration and dashboards. However, if you have multiple data sources in addition to ElasticSearch, want built-in LDAP authentication, or need the ability to annotate graphs, you may want to consider Grafana for surfacing your dashboards and visualizations.
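Besides the web UI, a datasource can also be registered through Grafana’s HTTP API by POSTing JSON like the following to `/api/datasources`; the name, URL, and index pattern here are illustrative, and `esVersion` must match your cluster (2 or 5):

```json
{
  "name": "elk",
  "type": "elasticsearch",
  "access": "proxy",
  "url": "http://localhost:9200",
  "database": "logstash-*",
  "jsonData": { "timeField": "@timestamp", "esVersion": 5 }
}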
Continue reading “Grafana: Connecting to an ElasticSearch datasource”
When building your Logstash filter, you often want to validate your assumptions against a large sample of input events without sending all the output to ElasticSearch.
Using Logstash metrics and conditionals, we can easily show:
- How many input events were processed successfully
- How many input events had errors
- An error file containing every event that failed processing
This technique gives you the ability to track your success rate across a large input set, and then do a postmortem review of each event that failed.
I’ll walk you through a Logstash conf file that illustrates this concept.
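As a preview, the skeleton of that conf file looks something like this. It is a sketch, not the full file: the grok pattern, tag names, and file path are illustrative, and the metrics filter emits a synthetic counter event on each flush, which the output conditionals route separately from real events.

```
filter {
  grok {
    # a failed match tags the event with "_grokparsefailure"
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  metrics {
    meter          => ["events.total"]
    add_tag        => ["metric"]
    flush_interval => 30
  }
  if "_grokparsefailure" in [tags] {
    metrics {
      meter   => ["events.error"]
      add_tag => ["metric"]
    }
  }
}

output {
  if "metric" in [tags] {
    # periodic success/error counters for the console
    stdout { codec => rubydebug }
  } else if "_grokparsefailure" in [tags] {
    # every failed event, kept for postmortem review
    file { path => "/tmp/logstash-errors.log" }
  }
}
```

The success count is simply `events.total` minus `events.error`, and the error file gives you each failing event verbatim.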
Continue reading “Logstash: Using metrics to debug the filtering process”