This fosters the development of community-driven modules like the py-zabbix Python module, which makes it easy to automate Zabbix and to send and retrieve metrics.
By its nature, the amount of data collected in your Elasticsearch instance will continue to grow, and at some point you will need to prune or warehouse indexes so that your active collections are prioritized.
ElasticDump can assist, either by moving your indexes to a distinct Elasticsearch instance set up specifically for long-term data, or by exporting the data as JSON for later import into a warehouse like Hadoop. ElasticDump has no special filter for time-based indexes (index-YYYY.MM.DD), so you must specify exact index names.
In this article we will use Python to query a source Elasticsearch instance (meant for near-real-time querying, holding a minimal amount of data) and export any indexes from the last 14 days into a target Elasticsearch instance (meant for data warehousing, with more persistent storage, where users expect multi-second query times).
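Because both ElasticDump and the Elasticsearch APIs need exact index names, a natural first step is computing the names for the last 14 days. A minimal sketch, assuming a hypothetical logstash-YYYY.MM.DD naming scheme:

```python
from datetime import date, timedelta

def recent_indexes(prefix='logstash', days=14, today=None):
    """Return time-based index names (prefix-YYYY.MM.DD) for the last `days` days."""
    today = today or date.today()
    return ['%s-%s' % (prefix, (today - timedelta(n)).strftime('%Y.%m.%d'))
            for n in range(days)]

print(recent_indexes(days=3, today=date(2015, 6, 10)))
# -> ['logstash-2015.06.10', 'logstash-2015.06.09', 'logstash-2015.06.08']
```

The resulting names can then be fed to your copy mechanism of choice (an Elasticsearch reindex call, or elasticdump invocations) one index at a time.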
SaltStack has the ability to create custom states, grains, and external pillars. There is a long list of standard external pillars, ranging from those that read local JSON files to those that pull from EC2, MongoDB, etcd, and MySQL.
In this article, we will use Apache ZooKeeper as the storage facility for our SaltStack pillar data. ZooKeeper is used extensively for configuration management and synchronization of distributed applications, so it makes sense that it could serve as a central repository for pillar data.
In this article I’ll go through an example of using Python to read entries from a JSON file and create a local file from each of those entries. We’ll use the Jinja2 templating language to generate each file from a base template.
Our particular example will be the generation of Logstash filters for log processing, but the techniques for using JSON to drive Python processing or Jinja2 templating within Python are general purpose.
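The core of that technique fits in a few lines: parse the JSON, render each entry through a shared template, and write one file per entry. The entry fields (`name`, `type`, `pattern`) and the grok-filter template below are hypothetical stand-ins for whatever your JSON actually describes:

```python
import json
from jinja2 import Template

# Hypothetical JSON entries describing Logstash filters to generate.
entries = json.loads('''
[
  {"name": "apache", "type": "apache_access", "pattern": "%{COMBINEDAPACHELOG}"},
  {"name": "syslog", "type": "syslog", "pattern": "%{SYSLOGLINE}"}
]
''')

# A minimal base template for a Logstash grok filter.
template = Template('''\
filter {
  if [type] == "{{ type }}" {
    grok { match => { "message" => "{{ pattern }}" } }
  }
}
''')

# One rendered file per JSON entry.
for entry in entries:
    with open('%s.conf' % entry['name'], 'w') as f:
        f.write(template.render(**entry))
```

In practice the JSON would live in its own file (loaded with `json.load`) and the template in a `.j2` file loaded through a Jinja2 environment, but the data flow is the same.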
SaltStack grains hold relatively static information such as operating system, IP address, and other system properties. They are also useful for targeting minions: for example, by whether a system is part of dev/test/prod, or by a flag indicating whether it falls under LifeScience or HIPAA regulation.
In this article we will implement a custom grain, written in Python, that determines whether a host is part of a development, test, or production environment based on a simple naming scheme.