If you have worked on deploying packages via apt-get, you are probably familiar with a couple of forms of interruption during the package installation and upgrade process.
The first is the text menu shown during package upgrades that informs you that a new configuration file is available and asks if you want to keep your current one, use the new one from the package maintainer, or show the difference.
The second is the occasional ASCII dialog that interrupts the install/upgrade and asks for essential information before moving forward. The screenshot below is the dialog you get when installing MySQL or MariaDB, asking you to set the initial root password for the database.
The problem, in this age of cloud scale, is that you often need completely silent installations and upgrades that can be pushed out via Configuration Management. Even if this is a build for an immutable image, you would prefer a completely automated construction process instead of manual intervention each time you build an image.
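A minimal sketch of the pre-seeding approach, using the MySQL root password prompt as the example; the password shown is a hypothetical placeholder, and the commented commands require root and the MySQL package, so only the preseed file itself is built here:

```shell
# Pre-seed the answers debconf would normally prompt for (hypothetical password).
cat > mysql.preseed <<'EOF'
mysql-server mysql-server/root_password password s3cret
mysql-server mysql-server/root_password_again password s3cret
EOF

# On a real host you would load the answers and install silently:
#   sudo debconf-set-selections mysql.preseed
#   sudo DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server

# Both password questions are now staged for a prompt-free install.
grep -c 'mysql-server/root_password' mysql.preseed   # → 2
```

With the answers staged ahead of time and the frontend set to noninteractive, Configuration Management can drive the install with no dialogs at all.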
Continue reading “Ubuntu: Silent package installation and debconf”
As an exploration of AppDynamics’ APM functionality, you may find it useful to deploy a sample application that can quickly return useful data. The Java Spring PetClinic connecting back to a MySQL database provides a simple code base that exercises both database and application monitoring.
We’ll deploy the Java Spring PetClinic onto Tomcat running on Ubuntu 14.04. MySQL will be the backing persistence engine for the web application. The AppDynamics Java agent will be loaded into the JVM running Tomcat, and the AppDynamics Database Agent will connect to MySQL for metrics gathering.
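As a sketch of how the Java agent typically gets loaded into Tomcat (the path and Tomcat version here are assumptions, not taken from the post), a `setenv.sh` adds the `-javaagent` flag to the JVM options:

```shell
# /usr/share/tomcat7/bin/setenv.sh (hypothetical path and agent location)
export CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/appdynamics/javaagent/javaagent.jar"
```

Tomcat picks up `setenv.sh` automatically on startup, so no changes to the standard init scripts are needed.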
Continue reading “AppDynamics: Java Spring PetClinic and MySQL configured for monitoring”
The AppDynamics Machine Agent is used not only to report back on basic hardware metrics (cpu/memory/disk/network), but also as the hook for custom plugins that can report back on any number of applications including: .NET, Apache, AWS, MongoDB, Cassandra, and many others.
In this article, I’ll go over the details of installing the Machine Agent onto an Ubuntu 14.04 system.
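Once unpacked, the agent is pointed at a controller via its `conf/controller-info.xml`; the host, port, account, and key below are placeholders, not values from the post:

```xml
<controller-info>
  <controller-host>appd-controller.example.com</controller-host>
  <controller-port>8090</controller-port>
  <account-name>customer1</account-name>
  <account-access-key>REPLACE_WITH_KEY</account-access-key>
</controller-info>
```

With those values in place, starting the agent should register the machine under the corresponding application in the controller UI.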
Continue reading “AppDynamics: Installing a Machine Agent on Ubuntu 14.04”
The ElasticSearch stack (ELK) is a popular open-source solution that serves as both repository and search interface for a wide range of applications including: log aggregation and analysis, analytics store, search engine, and document processing.
Its standard web front-end, Kibana, is a great product for data exploration and dashboards. However, if you have multiple data sources including ElasticSearch, want built-in LDAP authentication, or the ability to annotate graphs, you may want to consider Grafana to surface your dashboards and visualizations.
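Whether added through the Grafana UI or its HTTP API, an Elasticsearch datasource definition boils down to a handful of fields; this JSON body (with a hypothetical URL and index pattern) is the general shape accepted by Grafana's `/api/datasources` endpoint:

```json
{
  "name": "elasticsearch-logs",
  "type": "elasticsearch",
  "access": "proxy",
  "url": "http://es-client.example.com:9200",
  "database": "[logstash-]YYYY.MM.DD",
  "jsonData": { "timeField": "@timestamp", "esVersion": 2 }
}
```

The bracketed index pattern tells Grafana how daily Logstash indices are named, so queries only hit the indices inside the selected time range.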
Continue reading “Grafana: Connecting to an ElasticSearch datasource”
Zabbix is an open-source monitoring solution that provides metrics collection, dynamic indexes, alerting, dashboards, and an API for external integration. But graphing is arguably one of Zabbix’s weak points; it still builds static images while other enterprise and consumer applications have set end users’ expectations for graph visualization and interactivity very high.
Luckily, the Zabbix plugin for Grafana can put a facelift on the valuable data stored in Zabbix. With this new data source, your end users can get the beautiful dashboard view they expect from a modern application.
Continue reading “Grafana: Connecting to a Zabbix datasource”
Grafana is an open-source visualization suite that is able to generate graphs and dashboards, in addition to alerting.
It is designed to retrieve data from various backends including: Graphite, ElasticSearch, Prometheus, and Zabbix.
This article will lead you through an installation of the latest stable version on Ubuntu 14.04.
Continue reading “Grafana: Installation on Ubuntu 14.04”
Until Zabbix3, trend data was not available via the Zabbix API. This meant that you could retrieve the raw values of a key over time, but not the aggregated historical trends of that value (e.g. the CPU average over 5-minute intervals).
The only way to monitor trends was to look at the visual graph generated by Zabbix or query the underlying database directly. Meanwhile, graphs are arguably one of Zabbix’s weak points, especially given newer solutions like Grafana.
This was a major gap in Zabbix2, and it led to community patches that exposed trend data in the 2.x API. With this trend data available, the community was free to write custom alerting, graphing, and capacity planning tools. For example, the Zabbix plugin for Grafana relies on this patch when the data source is Zabbix 2.x.
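With the patch applied (or natively on later Zabbix 3.0 releases), trend data can be pulled with a standard JSON-RPC call; the item id and timestamps below are purely illustrative:

```json
{
  "jsonrpc": "2.0",
  "method": "trend.get",
  "params": {
    "itemids": ["23296"],
    "time_from": 1477000000,
    "time_till": 1477086400,
    "output": ["itemid", "clock", "value_min", "value_avg", "value_max"]
  },
  "auth": "<session token from user.login>",
  "id": 1
}
```

The response returns one row per aggregation interval, which is exactly the data a graphing or capacity planning tool needs.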
Continue reading “Zabbix: Enabling API fetch of Trend data in Zabbix2”
Having Zabbix send alert mails directly to user groups is typically outgrown as the system matures and the number of alerts increases, new lines of business and engineering groups are on-boarded, and on-call scheduling is implemented.
If you already use PagerDuty for on-call scheduling, then it makes perfect sense to have Zabbix create incidents in PagerDuty. While it is possible to use standard email to perform some level of integration, the native library is the tightest integration you will find and supports multiple PagerDuty services.
The agent built by PagerDuty is especially well done, using their API to automatically create PagerDuty incidents as well as automatically mark them resolved if the trigger is only ephemeral (e.g. a temporary cpu spike).
Continue reading “Zabbix: Alert to PagerDuty using Zabbix3”
As part of normal long-term operations, the number of kernel images on your system will accumulate and take up disk space. This issue with space will be even more pronounced if /boot is mounted to its own smaller partition.
With Ubuntu 16.04, ‘apt autoremove --purge’ and configuration of unattended-upgrades can ensure that old kernel images are cleaned, but if you are using Ubuntu 14.04 or need to manually purge, then the instructions below can lead you through the process.
Before removing this unnecessary baggage, the first step is to check what kernel version is currently being used and the installation state.
> uname -r
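Before any purge, capture the running kernel release so it can be excluded from removal; the dpkg/apt commands are left commented since they require an Ubuntu host and root (the version placeholders in angle brackets are to be filled in from the dpkg listing):

```shell
# The running kernel must never be purged.
current=$(uname -r)
echo "running kernel: ${current}"

# On Ubuntu, any other installed ('ii') image is a purge candidate:
#   dpkg -l 'linux-image-[0-9]*' | awk '/^ii/{print $2}' | grep -v "${current}"
#   sudo apt-get purge linux-image-<old-version> linux-headers-<old-version>
```

Matching header packages should be purged alongside each old image, since they take up space in /usr/src even though they do not live in /boot.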
Continue reading “Ubuntu: Removing unused kernel images and headers”
Oracle VirtualBox as a virtualization engine paired with Vagrant provides a cross-platform virtualization-agnostic workflow for Linux, Windows, and MacOS. It is light enough to allow a developer to setup, test, and tear down virtual infrastructure as part of a unit test.
You may find yourself in a position where you have built a VM in VirtualBox that you need to test in a lab running VMware vCloud Director. This may raise the question, “Why not simply use the same script or process to rebuild the VM in the VMware lab?”
Perhaps the Vagrant box available in your VMware lab is not yet the latest version or OS flavor, maybe someone in the community constructed a Vagrant box for a software stack you have not yet scripted, or maybe internet access in the lab is limited to certain package repositories and you need to reference a custom archive. Whatever the reason, you can export your VM from VirtualBox and import it into VMware if you are willing to jump through a few hoops.
Continue reading “VMware: Exporting from Oracle VirtualBox/Vagrant to vCloud Director”
The ELK stack (ElasticSearch-Logstash-Kibana) is a horizontally scalable solution with multiple tiers and points of extension and scalability.
Because so many companies have adopted the platform and tuned it for their specific use cases, it would be impossible to enumerate all the novel ways in which scalability and availability have been enhanced by load balancers, message queues, indexes on distinct physical drives, etc… So in this article I want to explore the obvious extension points, and encourage the reader to treat this as a starting point in their own design and deployment.
Continue reading “ELK: Architectural points of extension and scalability for the ELK stack”
The heart of the ELK stack is Elasticsearch. In order to provide high availability and scalability, it needs to be deployed as a cluster with master and data nodes. The Elasticsearch cluster is responsible for both indexing incoming data as well as searches against that indexed data.
As described in the documentation, if there is one absolutely critical resource it is memory. Keeping the heap size below 32G will allow you to use compressed object pointers, which is preferred. Swapping incurs a big performance hit, so minimize swappiness on your Linux host.
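As a sketch (file locations and setting names vary by Elasticsearch version and packaging), the memory guidance above translates to settings like:

```
# /etc/default/elasticsearch (ES 2.x; ES 5.x+ sets -Xms/-Xmx in jvm.options instead)
ES_HEAP_SIZE=16g

# elasticsearch.yml: lock the heap so it cannot be swapped out
# (named bootstrap.mlockall in ES 2.x, bootstrap.memory_lock in 5.x+)
bootstrap.memory_lock: true

# /etc/sysctl.conf: keep the host from swapping eagerly
vm.swappiness=1
```

A common rule of thumb is to give Elasticsearch about half the machine's RAM, leaving the rest for the OS filesystem cache that Lucene depends on.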
Continue reading “ELK: Scaling an ElasticSearch Cluster”
The most varied point in an ELK (Elasticsearch-Logstash-Kibana) stack is the mechanism by which custom events and logs will get sent to Logstash for processing.
Companies running Java applications with logging sent to log4j or SLF4J/Logback will have local log files that need to be tailed. Applications running in containers may send everything to stdout/stderr, or have drivers for sending this on to syslog and other locations. Network appliances tend to have SNMP or remote syslog outputs.
But regardless of the details, events must flow from their source to the Logstash indexing layer. Doing this with maximized availability and scalability, and without putting excessive pressure on the Logstash indexing layer is the primary concern of this article.
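One common shipper for the file-tailing case is Filebeat, which can load-balance across several Logstash indexers; the hosts and paths below are placeholders, in Filebeat 5.x syntax:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/myapp/*.log

output.logstash:
  hosts: ["logstash1.example.com:5044", "logstash2.example.com:5044"]
  loadbalance: true
```

Spreading output across multiple indexers both increases availability and keeps any single Logstash instance from becoming the bottleneck.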
Continue reading “ELK: Feeding the logging pipeline”
If you are running an Ubuntu server for any extended period of time, security issues will arise that affect the kernel, distribution, or packages installed on that host.
While there are always minimal risks associated with automatically applying security fixes, I feel those are dwarfed by the risks of running hosts that have known security flaws. For example, a media frenzy over the OpenSSL vulnerability Heartbleed may have forced administrators the world over to go out and manually patch their fleet of Linux hosts, but the truth is there is a constant stream of public vulnerabilities.
Expecting system administrators to manually patch each of these (in addition to their other daily tasks) is unrealistic, and therefore Ubuntu provides a simple way of scheduling unattended security updates.
First, install the unattended-upgrades package:
> sudo apt install unattended-upgrades
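The package drops its policy files under /etc/apt/apt.conf.d/; enabling the periodic run amounts to ensuring these two lines exist, which mirrors what ‘dpkg-reconfigure unattended-upgrades’ writes:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

The companion file 50unattended-upgrades then controls which origins (e.g. the security pocket) are eligible for automatic upgrade.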
Continue reading “Ubuntu: Unattended Upgrades for security patches”
Although the ELK stack has rich support for clustering, clusters cannot span WAN connections because Elasticsearch is sensitive to latency. There are also practical concerns of network throughput given how much data some installations index on an hourly basis.
So as nice as it would be to have a unified, eventually consistent cluster span your North America and European datacenters, that is not currently a possibility. Spanning availability zones in the same AWS region will work, but not spanning different regions.
But first let’s consider why we want a distributed Elasticsearch cluster in the first place. It is not typically for geo failover or disaster recovery (because we can implement that separately in each datacenter), but more often because we want end users to have a federated search experience.
We want end users to go to a single Kibana instance, regardless of which cluster they want to search, and be able to execute a search query against the data. A Tribe node can bridge two distinct clusters for this purpose.
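A Tribe node's configuration simply names the clusters it should join; the cluster names below are illustrative:

```yaml
# elasticsearch.yml on the tribe node
tribe:
  us:
    cluster.name: elk-us
  eu:
    cluster.name: elk-eu
```

The tribe node joins both clusters as a client, so a Kibana instance pointed at it can search indices from either side.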
Continue reading “ELK: Federated Search with a Tribe node”
Kibana is the end user web application that allows us to query Elasticsearch data and create dashboards that can be used for analysis and decision making.
Although Kibana can be pointed to any of the nodes in your Elasticsearch cluster, the best way to distribute requests across the nodes is to use a non-master, non-data Client node. Client nodes have the following properties set in elasticsearch.yml:
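For a client (coordinating-only) node, those properties amount to disabling both the master and data roles:

```yaml
# elasticsearch.yml
node.master: false
node.data: false
```

Such a node still joins the cluster and knows where every shard lives, so it can fan out Kibana's queries and merge the results without carrying data itself.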
Continue reading “ELK: Pointing Kibana to a Client Node”
SaltStack has the ability to create custom states, grains, and external pillars. There is a long list of standard external pillars ranging from those which read from local JSON files, to those that pull from EC2, MongoDB, etcd, and MySQL.
In this article, we will use Apache ZooKeeper as the storage facility for our SaltStack pillar data. ZooKeeper is used extensively for configuration management and synchronization of distributed applications, so it makes sense that it could serve as a central repository for pillar data.
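On the master, an external pillar is wired up in the master configuration; the module name and ZooKeeper path here are hypothetical, matching whatever name the custom module registers:

```yaml
# /etc/salt/master (hypothetical module name and znode path)
ext_pillar:
  - zookeeper: /salt/pillar
```

After a restart of the master, the data returned by the module is merged into pillar alongside the normal file-based pillar tree.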
Continue reading “SaltStack: Creating a ZooKeeper External Pillar using Python”
It may be hard to imagine on the development side, but there are instances where a deployed host is not accessible from the Salt Master in a production environment. This forces a bit of creativity if you have a set of standard formulas you need to apply to the host.
For instance, imagine a host sitting in a highly restricted DMZ network. Even with the advent of Salt SSH for minionless administration, SSH access may only be opened from a jumpbox and not the Salt Master itself. In cases like this, a Masterless Minion is a viable alternative.
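A masterless minion keeps its formulas on local disk and applies them itself; the key switch is telling the minion not to reach out to a master:

```yaml
# /etc/salt/minion
file_client: local
```

States are then applied locally with something like ‘salt-call --local state.apply’, with the formulas delivered to the host by whatever out-of-band mechanism the DMZ allows.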
Continue reading “SaltStack: Running a masterless minion on Ubuntu”
Python is a language whose advantages are well documented, and the fact that it has become ubiquitous on most Linux distributions makes it well suited for quick scripting duties.
In this article I’ll go through an example of using Python to read entries from a JSON file, and from each of those entries create a local file. We’ll use the Jinja2 templating language to generate each file from a base template.
Our particular example will be the generation of Logstash filters for log processing, but the techniques for using JSON to drive Python processing or Jinja2 templating within Python are general purpose.
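As a sketch of the data-driven approach (the variable names are invented for illustration), each JSON entry supplies values that a Jinja2 template interpolates into a Logstash filter file:

```jinja
# logstash-filter.conf.j2 -- rendered once per JSON entry
filter {
  if [type] == "{{ log_type }}" {
    grok {
      match => { "message" => "{{ grok_pattern }}" }
    }
  }
}
```

Python's json module loads the entries, and jinja2's Template.render() produces one filter file per entry.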
Continue reading “Python: Using Python, JSON, and Jinja2 to construct a set of Logstash filters”
For full instructions on installing the AppDynamics Controller on Linux, see the official documentation. However, when you get to the step for installing in silent mode, it can be confusing because although it shows you how to specify the path to a response file and the keys available, it does not give you a sample file.
./controller_64bit_linux.sh -q -c -varfile /home/user/response.varfile
One way to generate a sample file that matches the responses you want in production is to manually install the controller in a development environment first. If you run the installer:
Continue reading “AppDynamics: Silent Install of Controller on Ubuntu and license directory”
Decompiling Java classes is sometimes associated with dubious behavior around proprietary and licensed software, but in reality there are many valid reasons why one may find it necessary to dig into Java class files and jar/war archives. It can be as simple as your development team no longer having the source for the 2-year-old version of the code deployed in production.
We’ll go over a couple of ways to decompile Java classes on an Ubuntu desktop.
Continue reading “Ubuntu: Decompiling Java classes on Ubuntu using Eclipse and JD-GUI”
The Dirty COW vulnerability affects the kernel of most base Ubuntu versions. Especially when running an Ubuntu HWE stack, it can be a bit confusing to determine if your kernel and Ubuntu version are affected.
If you need to validate patching, then you can use a simple C program to exercise this read-only write vulnerability and check your system.
Continue reading “Ubuntu: Determine system vulnerability for Dirty COW CVE-2016-5195”
When automating software and infrastructure, it is not uncommon to need to supply a user id and password for installation or other operations. While it is certainly possible to pass these plaintext credentials directly in the state, this is not best practice.
# not best practice!!!
create_frank_db_user:
  mysql_user.present:
    - name: frank
    - password: "test3rdb"
    - host: localhost
There are several issues with this approach.
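Instead, Salt's GPG renderer lets the pillar carry ciphertext that only the master's keyring can decrypt; the armored block below is a placeholder for real output of ‘gpg --armor --encrypt’:

```yaml
#!yaml|gpg
mysql:
  password: |
    -----BEGIN PGP MESSAGE-----
    <ciphertext placeholder>
    -----END PGP MESSAGE-----
```

The minion only ever receives the decrypted value over the already-encrypted Salt transport, and the plaintext never lands in your state repository.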
Continue reading “SaltStack: Keeping Salt Pillar data encrypted using GPG”
When using Jinja2 for SaltStack formulas, you may be surprised to find that your globally scoped variables cannot be modified inside a loop. Although this is counterintuitive given the scoping behavior of most scripting languages, a Jinja2 globally scoped variable unfortunately cannot be modified from an inner scope.
As an example of a solution that will not work, let’s say you have a global flag ‘foundUser’ set to False, then want to iterate through a group of users, and if a condition is met inside the loop, then ‘foundUser’ would be set to True.
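Jinja2 2.10 later added a namespace object precisely for this case; on older releases the usual workaround is mutating a list via the ‘do’ extension. A sketch of the namespace form, assuming a ‘users’ variable is in scope:

```jinja
{% set ns = namespace(foundUser=false) %}
{% for user in users %}
  {% if user == 'frank' %}
    {% set ns.foundUser = true %}
  {% endif %}
{% endfor %}
foundUser is {{ ns.foundUser }}
```

Because attribute assignment on the namespace object is not subject to block scoping, the flag set inside the loop survives after it.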
Continue reading “SaltStack: Setting a jinja2 variable from an inner block scope”
SLF4J, the Simple Logging Facade for Java, is a popular front for various logging backends, one of them being Logback. With the advent of containerization, using syslog to send data to remote logging infrastructure has become a popular transport method.
Enable Syslog Input
The first step is to enable the receipt of syslog messages. This could be any server listening for syslog messages. You can follow my previous article on configuring an Ubuntu server to receive RFC5424 compatible messages or you can configure a syslog input in Logstash.
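In Logstash, that input is only a few lines (the port is arbitrary; ports below 1024 would require root):

```
input {
  syslog {
    port => 5514
  }
}
```

The syslog input parses RFC3164-style messages into fields automatically, so downstream filters can work with structured data.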
Continue reading “Syslog: Sending Java SLF4J/Logback to Syslog”
Logging has always been a critical part of application development. But the rise of OS virtualization, application containers, and cloud-scale logging solutions has turned logging into something bigger than managing local debug files.
Modern applications and services are now expected to feed log aggregation and analysis stacks (ELK, Graylog, Loggly, Splunk, etc). This can be done in a multitude of ways; in this post I want to focus on modifying log4j2 so that it sends directly to an rsyslog server.
Even though we focus on sending to an Ubuntu rsyslog server in this post, the receiver could be any entity listening for syslog traffic, such as Logstash.
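In log4j2 terms this is the Syslog appender; the host and app name below are placeholders:

```xml
<Appenders>
  <Syslog name="remote" host="rsyslog.example.com" port="514"
          protocol="UDP" format="RFC5424" appName="myapp" facility="LOCAL0"/>
</Appenders>
```

A logger then references the appender by name, and each log event is emitted as an RFC5424 syslog message over the network.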
Continue reading “Syslog: Sending Java log4j2 to rsyslog on Ubuntu”
SaltStack grains are used for relatively static information such as operating system, IP address, and other system properties. They are also useful for targeting minions, for example whether a system is part of dev/test/prod, or a flag on whether it falls under LifeScience or HIPAA regulation.
In this article we will implement a custom grain that determines whether a host is part of development, test, or production environment based on a simplistic naming scheme. This custom grain will be written in Python.
Continue reading “SaltStack: Creating a Custom Grain using Python”
The long chains of firewalls and reverse proxies present in production infrastructure (made even more common by the dynamic routing introduced with containers) have made analysis of the end-user side of the network exchange a critical troubleshooting tool.
Fiddler has long been a solid tool for both proxy capture as well as analysis of the end user application traffic on the Windows platform. However, troubleshooting issues with customers always required them to first install the tool on their desktop, and at times corporate policies would prevent installation.
Now, with the built-in capabilities of the Chrome DevTools and Firefox Network Monitor, the capture can happen directly from the end user’s browser without any external tool installation. If that session needs to be analyzed by higher level support resources, it can be exported as an HTTP Archive (HAR), and then imported into Fiddler for analysis at a later time.
And since the release of Fiddler for Linux, the analysis of the HAR can be done directly on the Ubuntu desktop.
Continue reading “Ubuntu: Using Fiddler to analyze Chrome/Firefox network capture”
If you installed (or upgraded to) a later Ubuntu point release: >= 12.04.2, >=14.04.2, or >=16.04.2, you may now be wondering why the system is warning you upon every login that you will no longer receive security updates.
WARNING: Security updates for your current Hardware Enablement Stack ended on 2016-08-04:
Although the first point releases of an Ubuntu LTS (12.04.0 and 12.04.1, 14.04.0 and 14.04.1, 16.04.0 and 16.04.1) keep their kernel version supported until the standard 5-year End-Of-Life of that long-term release (LTS), subsequent point releases do not follow the same schedule.
The reason is that subsequent point releases ship with an updated kernel and X stack that require upgrade in order to maintain support. Referring to the support schedule above as an example, you can see that 14.04.3 was released with the 15.04 Vivid HWE stack, and only supported for 12 months before requiring an upgrade to 14.04.5 and the Xenial 16.04 HWE stack.
Continue reading “Ubuntu: HWE Hardware Enablement Stacks, LTS, and the Kernel”