For most Cloud Foundry applications and services, you can avoid maintenance downtime by combining 12-factor app development best practices with Cloud Foundry’s scaling and flexible routing to implement Blue-Green deployment.
But there will still be times when an application, service, or shared backend component does not support rolling upgrades, and a maintenance window is required.
When this happens, you must decide on the behavior you want for both front-end user applications and services: a static page may not adequately serve end users or the consuming clients that rely on your services.
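The route swap at the heart of Blue-Green deployment can be driven from a script. Below is a minimal sketch in Python that shells out to the cf CLI, assuming the CLI is installed and authenticated; the app names, domain, and hostname are placeholders, not values from the article.

```python
import subprocess

def blue_green_swap(new_app, old_app, domain, hostname, dry_run=False):
    """Shift traffic from old_app to new_app by remapping the public route.

    App names, domain, and hostname are illustrative placeholders.
    """
    commands = [
        # expose the new version on the public route alongside the old one
        ["cf", "map-route", new_app, domain, "--hostname", hostname],
        # then withdraw the old version from that route
        ["cf", "unmap-route", old_app, domain, "--hostname", hostname],
    ]
    if dry_run:
        return commands
    for cmd in commands:
        subprocess.run(cmd, check=True)
    return commands

# Example (dry run, prints the commands that would be executed):
# blue_green_swap("myapp-green", "myapp-blue", "example.com", "myapp", dry_run=True)
```

Mapping the new route before unmapping the old one means both versions briefly share traffic, so there is never a moment with no backing application.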
Continue reading “CloudFoundry: Beyond the maintenance page, delivering a response during service unavailability”
The open-source Zabbix monitoring solution has a JSON-RPC API that enables deep integration with your existing monitoring, logging, and alerting systems.
This fosters community-driven modules like the py-zabbix Python module, which makes it easy to automate Zabbix as well as send and retrieve metrics.
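Under the hood, py-zabbix wraps Zabbix’s JSON-RPC protocol. The stdlib-only sketch below shows the shape of the request body it builds; the endpoint URL and credentials are placeholders (though `api_jsonrpc.php` is Zabbix’s standard endpoint path).

```python
import json
import urllib.request

def zabbix_rpc_payload(method, params, auth=None, request_id=1):
    """Build a Zabbix JSON-RPC 2.0 request body."""
    payload = {
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    }
    if auth is not None:
        payload["auth"] = auth  # session token returned by user.login
    return payload

def zabbix_call(url, payload):
    """POST a JSON-RPC request to the Zabbix API endpoint (network call)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json-rpc"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Login is the first call; credentials here are Zabbix's well-known defaults.
login = zabbix_rpc_payload("user.login", {"user": "Admin", "password": "zabbix"})
# zabbix_call("http://zabbix.example.com/api_jsonrpc.php", login)
```

py-zabbix hides all of this behind its `ZabbixAPI` class, but seeing the raw payload makes debugging integration issues much easier.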
Continue reading “Zabbix: Accessing Zabbix using the py-zabbix Python module”
By nature, the amount of data collected in your Elasticsearch instance will continue to grow, and at some point you will need to prune or warehouse indexes so that your active collections are prioritized.
ElasticDump can assist in moving your indexes, either to a distinct Elasticsearch instance set up specifically for long-term data, or by exporting the data as JSON for later import into a warehouse like Hadoop. ElasticDump does not have a special filter for time-based indexes (index-YYYY.MM.DD), so you must specify exact index names.
In this article we will use Python to query a source Elasticsearch instance (meant for near-real-time querying, keeping a minimal amount of data) and export any indexes from the last 14 days into a target Elasticsearch instance (meant for data warehousing, with more persistent storage, where users expect multi-second query times).
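Because ElasticDump only accepts exact index names, the date arithmetic has to happen before invoking it. A sketch of that step, assuming the elasticdump CLI is on the PATH; the host URLs and `index-` prefix are placeholders:

```python
import subprocess
from datetime import date, timedelta

def recent_indexes(prefix="index-", days=14, today=None):
    """Expand index-YYYY.MM.DD names for the last `days` days,
    newest first, since ElasticDump has no date filter of its own."""
    today = today or date.today()
    return [
        prefix + (today - timedelta(days=n)).strftime("%Y.%m.%d")
        for n in range(days)
    ]

def dump_index(index, source, target):
    """Copy one index between clusters via the elasticdump CLI."""
    subprocess.run(
        [
            "elasticdump",
            "--input={}/{}".format(source, index),
            "--output={}/{}".format(target, index),
            "--type=data",
        ],
        check=True,
    )

# for index in recent_indexes():
#     dump_index(index, "http://source-es:9200", "http://warehouse-es:9200")
```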
Continue reading “ELK: ElasticDump and Python to create a data warehouse job”
SaltStack lets you create custom states, grains, and external pillars. There is a long list of standard external pillars, ranging from those that read local JSON files to those that pull from EC2, MongoDB, etcd, and MySQL.
In this article, we will use Apache ZooKeeper as the storage facility for our SaltStack pillar data. ZooKeeper is used extensively for configuration management and synchronization of distributed applications, so it makes sense that it could serve as a central repository for pillar data.
Continue reading “SaltStack: Creating a ZooKeeper External Pillar using Python”
Python is a language whose advantages are well documented, and the fact that it has become ubiquitous on most Linux distributions makes it well suited for quick scripting duties.
In this article I’ll go through an example of using Python to read entries from a JSON file, and from each of those entries create a local file. We’ll use the Jinja2 templating language to generate each file from a base template.
Our particular example will be the generation of Logstash filters for log processing, but the techniques for using JSON to drive Python processing or Jinja2 templating within Python are general purpose.
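The JSON-to-Jinja2 pipeline can be sketched as follows. The template, the entry fields (`log_type`, `pattern`), and the output file naming are hypothetical stand-ins, not the article’s actual Logstash filters.

```python
import json
from jinja2 import Template

# Hypothetical base template: one Logstash grok filter per JSON entry.
FILTER_TEMPLATE = Template("""\
filter {
  if [type] == "{{ log_type }}" {
    grok {
      match => { "message" => "{{ pattern }}" }
    }
  }
}
""")

def render_filters(json_path):
    """Read entries from a JSON file and write one filter file per entry."""
    with open(json_path) as f:
        entries = json.load(f)
    for entry in entries:
        text = FILTER_TEMPLATE.render(log_type=entry["log_type"],
                                      pattern=entry["pattern"])
        with open("10-{}-filter.conf".format(entry["log_type"]), "w") as out:
            out.write(text)
```

Note that grok’s `%{PATTERN}` syntax does not collide with Jinja2’s `{{ }}`/`{%` delimiters, so patterns pass through the template untouched.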
Continue reading “Python: Using Python, JSON, and Jinja2 to construct a set of Logstash filters”
SaltStack grains are used for relatively static information such as operating system, IP address, and other system properties. They are also useful for targeting minions: for example, whether a system is part of dev/test/prod, or whether it falls under life-science or HIPAA regulation.
In this article we will implement a custom grain, written in Python, that determines whether a host is part of a development, test, or production environment based on a simplistic naming scheme.
Continue reading “SaltStack: Creating a Custom Grain using Python”