CloudFoundry: Beyond the maintenance page, delivering a response during service unavailability

For most Cloud Foundry applications and services, you can avoid downtime for maintenance by combining 12-factor app development best practices with Cloud Foundry’s scaling and flexible routing to implement Blue-Green deployment.

But there will still be times when an application, service, or shared backend component does not support rolling upgrades, and a maintenance window will be required.

When this happens, you must think about the behavior you want to implement for both front-end user applications and backend services.  A static page may not adequately serve the needs of end users or consuming clients that rely on your services.

Continue reading “CloudFoundry: Beyond the maintenance page, delivering a response during service unavailability”

CloudFoundry: Using Blue-Green deployment and route mapping for continuous deployment

One of the goals of Continuous Deployment (CD) is an automated deployment and patching process.  And as your organization becomes more efficient at deployment and velocity increases, you want to ensure that availability is maintained while keeping risk to a minimum.

Cloud Foundry supports route mapping that allows you to stand up a new live version of a service, validate its functionality, and only then switch your users over to the new service – all transparent to the end user.  This type of deployment is typically referred to as Blue-Green deployment as popularized by Martin Fowler.

Standing up a new, live service into a production environment allows you to validate the service fully before switching users over.  And if there are issues discovered, reversion back to the older service is just a matter of switching the mapping back.
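
As a rough sketch of the flow (the app names, domain, and hostname below are hypothetical), the cf CLI route-mapping commands look like this:

cf push myapp-green                                       # deploy the new version alongside the running one
cf map-route myapp-green example.com --hostname myapp     # the green version now also serves the live route
cf unmap-route myapp-blue example.com --hostname myapp    # the blue version stops receiving traffic
cf stop myapp-blue                                        # or leave it running for a quick rollback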

Continue reading “CloudFoundry: Using Blue-Green deployment and route mapping for continuous deployment”

Java: Loading self-signed, CA, and SAN certificates into a Java Keystore

The JRE comes preloaded with a set of trusted root authorities, but if you are working with self-signed certificates, or SAN server certificates that were signed using your own Certificate Authority, then you are going to need to add these certificates to your trusted keystore.

If your Java application attempts to communicate via TLS to a remote host that does not present a certificate chain it trusts, you will get the all-too-famous “SSLHandshakeException: PKIX path building failed” exception.  At this point you have a couple of options:

  • Bypass all security checks by injecting a custom X509TrustManager that allows all communication
  • Add the certificates you want to trust into the TrustManager keystore

Bypassing security at this level is not a good idea.  It’s like telling your browser that you will never care about HTTPS certificates, so from now on just show the green icon in the address bar no matter what.  It is best to address security from day one, and not as a future feature.

In this article I will lead you through installing a self-signed as well as a CA-signed certificate into the TrustManager keystore so that TLS communication to remote sites is handled correctly and securely.
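
As a minimal sketch (the file names, alias, and password below are placeholders), importing a certificate into a trust store with keytool and pointing the JVM at it looks roughly like this:

# import the self-signed or CA certificate into a JKS trust store
keytool -importcert -alias myca -file myca.crt -keystore mytruststore.jks -storepass changeit

# tell the JVM to use that trust store for outbound TLS connections
java -Djavax.net.ssl.trustStore=mytruststore.jks -Djavax.net.ssl.trustStorePassword=changeit -jar myapp.jar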

Continue reading “Java: Loading self-signed, CA, and SAN certificates into a Java Keystore”

Ubuntu: Creating a self-signed SAN certificate using OpenSSL

There are numerous articles I’ve written where a certificate is a prerequisite for deploying a piece of infrastructure.

This article will guide you through generating a self-signed certificate with SAN (Subject Alternative Name) and SAN wildcard entries, replacing the deprecated practice of relying on CN=<FQDN> alone.

In addition to the operational benefits of managing SAN, it is also becoming more necessary at the client level: browsers like Chrome 58 and Firefox 48 no longer trust certificates without this specification.
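
As a hedged one-liner (the hostnames are placeholders, and the -addext flag requires OpenSSL 1.1.1 or newer; older releases need the SAN placed in a config file), a self-signed SAN certificate can be generated like this:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=myhost.example.com" \
  -addext "subjectAltName=DNS:myhost.example.com,DNS:*.example.com"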

If you just need a simple self-signed certificate where the Subject CN is sufficient to denote your public hostname, then read my article here instead.

If you manage a larger internal environment and want to create your own trusted Certificate Authority so you can provide trusted SAN certificates for multiple groups/services, then read my article here.  These also provide better support for full browser trust.

Continue reading “Ubuntu: Creating a self-signed SAN certificate using OpenSSL”

Ubuntu: Creating a trusted CA and SAN certificate using OpenSSL

There are numerous articles I’ve written where a certificate is a prerequisite for deploying a piece of infrastructure.

This article will guide you through creating a trusted CA (Certificate Authority), and then using that to sign a server certificate that supports SAN (Subject Alternative Name). 

Operationally, having your own trusted CA is advantageous over a self-signed certificate because once you install the CA certificate on a set of corporate/development machines, all the server certificates you issue from that CA will be trusted.  If you manage a larger internal environment where hosts, services, and containers are in constant flux, this is an operational win.

CA trust also has advantages over self-signed certs because browsers like Chrome 58 and Firefox 48 limit how far they will trust self-signed certificates.  The Windows version of Chrome is the only flavor that allows a self-signed cert to be imported as a trusted root authority; on all other operating systems the self-signed certificate is not trusted.  And Firefox allows you to add a permanent exception, but needs a trusted CA in order to show a fully green trust lock icon.
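
A rough outline of the OpenSSL commands (names, lifetimes, and hostnames are placeholders; the full article walks through the details):

# 1. create the CA key and self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -keyout ca.key -out ca.crt -subj "/CN=MyInternalCA"

# 2. create the server key and certificate signing request
openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr -subj "/CN=myhost.example.com"

# 3. sign the CSR with the CA, attaching the SAN extension
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 730 -out server.crt \
  -extfile <(printf "subjectAltName=DNS:myhost.example.com,DNS:*.example.com")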

If you just want a self-signed SAN certificate with no backing CA, then read my article here instead, but note that it has limitations that are overcome by using a trusted CA.

Continue reading “Ubuntu: Creating a trusted CA and SAN certificate using OpenSSL”

Ubuntu: A centralized apt package cache using Apt-Cacher-NG

It is common in secure production datacenters for internal hosts to be forced to go through a reverse proxy for public internet access.  The same concept can be applied to apt package management, where setting up a centralized package proxy enables caching as well as security controls.

In a datacenter where you could have hundreds of host instances all needing the same package/kernel/security patch, having a cache of packages inside your network can save a significant amount of network bandwidth and operator time.

In this article, we’ll go through installation and configuration of Apt-Cacher-NG, a specialized reverse proxy for Debian-based distributions that does whitelisting of repositories, precaching, and remapping to support caching for SSL repositories.
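
At its simplest (the proxy hostname below is a placeholder), the server install and the per-client proxy setting look like this:

# on the cache server
sudo apt-get install apt-cacher-ng      # listens on port 3142 by default

# on each client, route apt traffic through the cache
echo 'Acquire::http::Proxy "http://aptcache.example.com:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy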

Continue reading “Ubuntu: A centralized apt package cache using Apt-Cacher-NG”

Ubuntu: A centralized apt package cache using squid-deb-proxy

It is common in secure production datacenters for internal hosts to be forced to go through a reverse proxy (e.g. Squid) for public internet access.  The same concept can be applied to apt package management, where setting up a centralized package proxy enables caching as well as security controls.

In a datacenter where you could have hundreds of host instances all needing the same package/kernel/security patch, having a cache of packages inside your network can save a significant amount of network bandwidth and operator time.

And just like an internet proxy that whitelists only specific domains, a package proxy can have a whitelist of apt repositories, as well as a blacklist of specific packages.

In this article we’ll go through installation and configuration of squid-deb-proxy, which is just a packaging of Squid3 with specific tunings for package caching.  Since most Security and Operations teams are familiar with Squid already, this makes it easier to get deployment approval versus other package caching solutions.
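
As a quick sketch, the server and client packages install like this (the client package relies on avahi auto-discovery; a manual proxy entry pointing at port 8000 works as well):

# on the cache server
sudo apt-get install squid-deb-proxy          # listens on port 8000 by default

# on each client, auto-discover the proxy
sudo apt-get install squid-deb-proxy-client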

Continue reading “Ubuntu: A centralized apt package cache using squid-deb-proxy”

Git: Uploading an existing local project to GitHub from the console

Getting a local project into a public repository on GitHub only takes a few steps.  This is well documented on a GitHub help page.

In this article, I’ll go through getting a local project called “myproject1” into a public GitHub repository.
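
The core commands boil down to the following (replace <youruser> with your GitHub account; the empty repository must already exist on GitHub):

cd myproject1
git init
git add .
git commit -m "initial commit"
git remote add origin https://github.com/<youruser>/myproject1.git
git push -u origin master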

Continue reading “Git: Uploading an existing local project to GitHub from the console”

Maven: Installing a private Maven repository on Ubuntu using Apache Archiva

An essential part of the standard build process for Java applications is having a repository where project artifacts are stored.

Artifact curation provides the ability to manage dependencies, quickly roll back releases, support compatibility of downstream projects, do QA promotion from test to production, support a continuous build pipeline, and provide auditability.

Archiva from the Apache foundation is open-source and can serve as the repository manager for popular build tools such as Maven and Jenkins.
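
Once Archiva is running, publishing an artifact from the console is a deploy-file invocation along these lines (the host, repository id, and coordinates below are placeholders):

mvn deploy:deploy-file -DgroupId=com.example -DartifactId=mylib -Dversion=1.0.0 \
  -Dpackaging=jar -Dfile=mylib-1.0.0.jar \
  -DrepositoryId=archiva.internal -Durl=http://archiva.example.com:8080/repository/internal/

The repositoryId needs a matching <server> entry with credentials in your Maven settings.xml.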

Continue reading “Maven: Installing a private Maven repository on Ubuntu using Apache Archiva”

Java: Using XMLAdapter to process custom datatypes using JAXB

JAXB provides a framework for two-way binding between XML and Java structures, making it easy for developers to serialize and deserialize their XML into Java objects.

Decorating a class with @XmlElement and @XmlAttribute annotations is all it usually takes to build a rich domain model from XML, but sometimes you get into a situation where more custom processing is required.

A custom XmlAdapter, registered with the @XmlJavaTypeAdapter annotation, can help resolve these issues by providing custom methods for marshalling and unmarshalling the XML into the correct Java type.

Continue reading “Java: Using XMLAdapter to process custom datatypes using JAXB”

Squid: Enabling whitelisted FTP proxying using Squid

Having your production servers go through a proxy like Squid for internet access can be an architectural best practice that provides network security as well as caching efficiencies.

In a previous article, I showed how you can enforce whitelists for specific domains when using HTTP/HTTPS.  Now let’s do the same thing for FTP connections, proxying passive FTP connections through Squid, using explicit domain whitelists.
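
The rough shape of the configuration (the whitelisted domain is just an example, and ordering relative to your existing http_access rules still matters) looks like this:

# /etc/squid/squid.conf additions for whitelisted FTP proxying
acl ftp proto FTP
acl ftp_whitelist dstdomain .ubuntu.com
http_access allow ftp ftp_whitelist

# quick test from a client, sending an ftp:// URL through the HTTP proxy port
curl -x http://mysquidhost:3128 ftp://ftp.ubuntu.com/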

Continue reading “Squid: Enabling whitelisted FTP proxying using Squid”

Java: Determining the Java version used to compile a class, ‘class file has the wrong version’

If a Java class file was compiled for a newer Java version than the one currently being run, you will get the ‘bad class file’ error, with a message like ‘class file has the wrong version XX.0, should be XX.0’.

The good news is that this is a relatively simple issue to address.  For example, if the .class file was compiled as a Java 1.8 class file on a Jenkins continuous integration node, but the JRE on the desktop where you want to run it only has Java 1.7, then you will get this message when you attempt to run it.
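
A quick way to see which version a class file was built for (the class name is just an example) is javap:

javap -verbose MyClass | grep "major version"
# major version 52 = Java 8, 51 = Java 7, 50 = Java 6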

Continue reading “Java: Determining the Java version used to compile a class, ‘class file has the wrong version’”

Java: Using Maven from the console for running classes, unit tests, checking dependencies

In this short article, I’ll provide some Maven commands that I’ve found helpful.

Run single class from src/main/java

mvn exec:java -Dexec.mainClass=this.is.MyClass -Dexec.args="myarg1 'my second arg' myarg3"

Run unit test from src/test/java, all methods decorated with @Test

mvn test -Dtest=this.is.MyTestClass

Run unit test from src/test/java, only methods decorated with @Test and that start with ‘testDatabase’

mvn test -Dtest=this.is.MyTestClass#testDatabase*
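
Check project dependencies (both goals below come from stock Maven plugins)

mvn dependency:tree       # show the full resolved dependency tree
mvn dependency:analyze    # flag used-but-undeclared and declared-but-unused dependencies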

Continue reading “Java: Using Maven from the console for running classes, unit tests, checking dependencies”

Ubuntu: Testing the official released kernel patches for Meltdown CVE-2017-5754

The Meltdown vulnerability affects Intel and some ARM (but not AMD) processor chips and can allow unprivileged access to memory in the kernel and other processes.

Canonical has committed to kernel patches to address this issue, and they are now available from both the official Ubuntu updates and security repositories.

In this article, I’ll step through patching an Ubuntu kernel with the candidate kernel fixes.
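
In short (the exact kernel version you land on depends on your release), pulling in the patched kernel is a standard upgrade and reboot:

sudo apt-get update
sudo apt-get dist-upgrade        # pulls in the patched linux-image packages
sudo reboot
uname -r                         # confirm the new kernel version after reboot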

Continue reading “Ubuntu: Testing the official released kernel patches for Meltdown CVE-2017-5754”

Ubuntu: Testing the first candidate kernel patches for Meltdown CVE-2017-5754

The Meltdown vulnerability affects Intel and some ARM (but not AMD) processor chips and can allow unprivileged access to memory in the kernel and other processes.

Canonical has committed to kernel patches to address this issue by January 9, 2018 and the first candidate kernel patches have now been released for Xenial and Trusty LTS.

UPDATE Jan 11 2018: The main Ubuntu repositories now have the official patches.  Read my article here for more information.

In this article, I’ll step through patching an Ubuntu 16.04 kernel with the candidate kernel fixes.

Continue reading “Ubuntu: Testing the first candidate kernel patches for Meltdown CVE-2017-5754”

Ubuntu: Testing the KAISER kernel patch for Meltdown CVE-2017-5754

The Meltdown vulnerability affects Intel and some ARM (but not AMD) processor chips and can allow unprivileged access to memory in the kernel and other processes.  Canonical has committed to kernel patches to address this issue by January 9, 2018.

A paper coming out of Graz University of Technology in Austria and written by Daniel Gruss, Moritz Lipp, Michael Schwarz, Richard Fellner, Clementine Maurice, and Stefan Mangard provides a patched 4.10.0 kernel that isolates the kernel address space and resolves CVE-2017-5754 (Meltdown).

No one is advocating this as the fix for your production instances, but if you want to play around with this patched kernel in a virtualized environment, I’ll lead you through the steps in this article.

UPDATE Jan 11 2018: The main Ubuntu repositories now have the official patches.  Read my article here for more information.

Continue reading “Ubuntu: Testing the KAISER kernel patch for Meltdown CVE-2017-5754”

Ubuntu: Determine system vulnerability for Meltdown CVE-2017-5754

The Meltdown vulnerability affects Intel and some ARM (but not AMD) processor chips and can allow unprivileged access to memory in the kernel and other processes.  Canonical has committed to kernel patches to address this issue by January 9, 2018.

If you need to check your system, or perhaps have already patched your systems but want to verify that the issue truly is resolved, there is a proof of concept available on GitHub that exercises a rogue data cache load (Variant 3).

In this article I will show you how to compile and run this non-destructive C++ program on Ubuntu 14.04 and 16.04.
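
As a complementary quick check (only meaningful on a kernel that carries the KPTI patches), the kernel logs whether page table isolation is active:

dmesg | grep -i 'page table.*isolation'
# a patched kernel reports something like: "Kernel/User page tables isolation: enabled"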

Continue reading “Ubuntu: Determine system vulnerability for Meltdown CVE-2017-5754”

Ubuntu: Determine system vulnerability for Spectre CVE-2017-5715 CVE-2017-5753

The Spectre vulnerability affects Intel, AMD, and ARM processor chips (each to various degrees) and can allow unprivileged access to memory in the kernel and other processes.  Canonical has committed to kernel patches to address this issue by January 9, 2018.

If you need to check your system, or perhaps have already patched your systems but want to verify that the issue truly is resolved, there is a simple proof of concept that exercises the bounds check bypass within the same process (Variant 1, CVE-2017-5753).

In this article I will show you how to compile and run this small, non-destructive C program that is included as Appendix A in the Spectre whitepaper.
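
If you save the Appendix A listing as, say, spectre.c (the filename is arbitrary), building and running it is a one-liner; optimization is best left off so the compiler does not disturb the timing-sensitive code:

gcc -O0 -o spectre-poc spectre.c
./spectre-poc        # a vulnerable system leaks the secret string via the cache side channel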

Continue reading “Ubuntu: Determine system vulnerability for Spectre CVE-2017-5715 CVE-2017-5753”

CloudFoundry: Enabling Java JMX/RMI access for remote containers

Enabling the use of real-time JVM monitoring tools like jconsole and VisualVM can be extremely beneficial when troubleshooting issues.  These tools work by enabling a JMX/RMI communication channel to the JVM.

These are typically thought of as local development tools, but they can also be used on remote CF containers running Java.  In this article, I’ll show you how to enable JMX/RMI to your remote Cloud Foundry container.
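
As a sketch of the idea (the app name and port are placeholders, and exactly how JAVA_OPTS is applied depends on your buildpack), you enable JMX on a fixed port bound to localhost and then tunnel to it with cf ssh:

cf set-env myapp JAVA_OPTS "-Dcom.sun.management.jmxremote.port=5000 -Dcom.sun.management.jmxremote.rmi.port=5000 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1"
cf restage myapp
cf ssh -N -L 5000:localhost:5000 myapp     # then point jconsole or VisualVM at localhost:5000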

Continue reading “CloudFoundry: Enabling Java JMX/RMI access for remote containers”

CloudFoundry: Java thread and heap dump analysis on remote containers

Java thread and heap dumps are valuable tools for troubleshooting local development, but they can also be used on remote CF containers running a JVM.  In this article, we’ll go through various methods of gathering this data from a Cloud Foundry container and then tools for analyzing it.

No matter how uniform your environments are, whether using Cloud Foundry stemcells/containers, configuration management tools, or Docker images, there are always real-world issues that show up only in certain environments (especially production!).  There are unique corner cases that get exposed by end user experimentation, unexpected thread locking, generational memory issues, etc., and thread and heap dump analysis tools can assist.
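
One of the simplest approaches (the app name is a placeholder, and the tooling available inside the container depends on the JRE your buildpack provides) is to cf ssh into the container and signal the JVM:

cf ssh myapp
ps aux | grep java                # find the JVM process id
kill -3 <pid>                     # thread dump goes to stdout, visible via 'cf logs myapp'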

Continue reading “CloudFoundry: Java thread and heap dump analysis on remote containers”

CloudFoundry: Enabling Java remote debugging with Eclipse

Remote debugging of Java applications from an IDE can be essential when debugging difficult issues.  There is no reason to give this functionality up just because you are deploying to a container in Cloud Foundry.

In this article we’ll go over how to enable remote debugging from a local Eclipse IDE to a public CF provider like Pivotal CloudFoundry.
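
The general pattern (port and app name are placeholders, and newer Java buildpacks offer their own debug switch) is to start the JVM with the JDWP agent and tunnel the debug port with cf ssh, then attach Eclipse as a remote Java application on localhost:8000:

cf set-env myapp JAVA_OPTS "-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n"
cf restage myapp
cf ssh -N -L 8000:localhost:8000 myapp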

Continue reading “CloudFoundry: Enabling Java remote debugging with Eclipse”

CloudFoundry: Monitoring the spring-music webapp, Part 5

Cloud Foundry is an opinionated Platform-as-a-Service that allows you to manage applications at scale. This article is part of a series that explores different facets of a Cloud Foundry deployment using the spring-music project as an example.

This article is Part 5 of a series on Cloud Foundry concepts.

In this particular article, we will look at application level monitoring of CF deployed applications using the New Relic Service Broker.  The New Relic product enables real-time monitoring of applications.
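
The broker workflow follows the usual Cloud Foundry service pattern (the service and plan names shown are examples; check cf marketplace for the ones your broker exposes):

cf marketplace                                   # list available services and plans
cf create-service newrelic standard my-newrelic  # create a service instance from the broker
cf bind-service spring-music my-newrelic
cf restage spring-music                          # restage so the agent picks up the binding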

Continue reading “CloudFoundry: Monitoring the spring-music webapp, Part 5”

CloudFoundry: Logging for the spring-music webapp, Part 4

Cloud Foundry is an opinionated Platform-as-a-Service that allows you to manage applications at scale.  This article is part of a series that explores different facets of a Cloud Foundry deployment using the spring-music project as an example.

This article is Part 4 of a series on Cloud Foundry concepts.

In this particular article, we will look at the Cloud Foundry log types, how to configure logback for spring-music, and then how to inject those events into a log pipeline.
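
For reference, streaming and dumping the aggregated Loggregator output for the app looks like this:

cf logs spring-music             # stream router, application, and staging log lines live
cf logs spring-music --recent    # dump the most recent buffered log lines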

Continue reading “CloudFoundry: Logging for the spring-music webapp, Part 4”

CloudFoundry: Exploring Cloud Foundry using the spring-music application

Cloud Foundry is an opinionated Platform-as-a-Service that allows you to manage applications at scale.  It supports multiple infrastructure platforms (EC2, VMware, OpenStack), and is able to standardize deployment, logging,  scaling, and routing in a way that is friendly to a continuous delivery pipeline.

In this series of articles, we will use the spring-music web application to explore Cloud Foundry features and concepts.

CloudFoundry: Scaling the spring-music webapp, Part 3

Cloud Foundry is an opinionated Platform-as-a-Service that allows you to manage applications at scale.  This article is part of a series that explores different facets of a Cloud Foundry deployment using the spring-music project as an example.

This article is Part 3 of a series on Cloud Foundry concepts.

Specifically in this article, we will horizontally and vertically scale up the spring-music application and show how this affects the routing and logging.
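
To give a feel for the commands involved (instance counts and sizes are just examples):

cf scale spring-music -i 3               # horizontal: run 3 instances behind the same route
cf scale spring-music -m 1G -k 1G        # vertical: adjust memory and disk per instance
cf app spring-music                      # confirm instance count and resource usage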

Continue reading “CloudFoundry: Scaling the spring-music webapp, Part 3”

CloudFoundry: Persisting spring-music data using Postgres service, Part 2

Cloud Foundry is an opinionated Platform-as-a-Service that allows you to manage applications at scale.  This article is part of a series that explores different facets of a Cloud Foundry deployment using the spring-music project as an example.

This article is Part 2 of a series on Cloud Foundry concepts.

In this particular article, we will create a Cloud Foundry Postgres service to externalize the persistent store instead of using the default in-memory H2 database which is destroyed every time the application is restarted or restaged.
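
The flow mirrors any Cloud Foundry service binding (service and plan names vary by provider, so the ones below are only examples):

cf marketplace                                          # find the Postgres offering and plan from your provider
cf create-service elephantsql turtle spring-music-db    # example names; substitute your provider's service and plan
cf bind-service spring-music spring-music-db
cf restage spring-music                                 # restage so the app picks up the bound credentials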

Continue reading “CloudFoundry: Persisting spring-music data using Postgres service, Part 2”

CloudFoundry: PCF Dev for local development on Ubuntu

PCF Dev is a distribution of Cloud Foundry that has a minimal footprint and is designed to run locally on a developer’s machine.  Using this lightweight distribution of Cloud Foundry, a developer can debug and deploy applications locally.

In this article, we’ll go through the installation of PCF Dev on an Ubuntu development host.
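
Once the PCF Dev cf CLI plugin is installed (it ships as a downloadable installer from Pivotal Network, so the exact file name varies by version), starting and targeting the local environment is roughly:

cf dev start                                                     # boots the local Cloud Foundry VM
cf login -a https://api.local.pcfdev.io --skip-ssl-validation    # local.pcfdev.io is the default PCF Dev domain
cf dev stop                                                      # shut it down when finished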

Continue reading “CloudFoundry: PCF Dev for local development on Ubuntu”

CloudFoundry: Deploying the spring-music webapp, Part 1

Cloud Foundry is an opinionated Platform-as-a-Service that allows you to manage applications at scale.  It supports multiple infrastructure platforms, and is able to standardize deployment, logging,  scaling, and routing in a way friendly to a continuous delivery pipeline.

This article is Part 1 of a series on Cloud Foundry concepts.

In this particular article, we will install the command line interface for Cloud Foundry on Ubuntu and then use that to deploy the Spring Boot based spring-music project to a CF provider.
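
The end-to-end flow condenses to a few commands (the API endpoint below is Pivotal Web Services; substitute your own provider):

# install the cf CLI from the Cloud Foundry apt repository
wget -q -O - https://packages.cloudfoundry.org/debian/cli.cloudfoundry.org.key | sudo apt-key add -
echo "deb https://packages.cloudfoundry.org/debian stable main" | sudo tee /etc/apt/sources.list.d/cloudfoundry-cli.list
sudo apt-get update && sudo apt-get install cf-cli

# build and push spring-music
git clone https://github.com/cloudfoundry-samples/spring-music.git && cd spring-music
./gradlew assemble
cf login -a https://api.run.pivotal.io
cf push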

Continue reading “CloudFoundry: Deploying the spring-music webapp, Part 1”

HAProxy: Zero downtime reloads with HAProxy 1.8 on Ubuntu 16.04 with Systemd

The reload functionality in HAProxy has, until now, always been “not perfect but good enough”: it might drop a few connections under heavy load, but within parameters everyone was willing to accept.  And because of the potential impact, a reload was typically only done during non-peak traffic times.

But with the popularity of microservices, containerization, continuous deployment, and dynamically scalable architecture, it has become critical for our load balancers to provide zero downtime reloads because reloading can potentially happen every few seconds even during peak production load.

There have been some seminal pieces written on how to achieve this level of availability with HAProxy. Yelp Engineering wrote up how to use qdiscs to delay the SYN packets, then followed up with using a combination of Nginx and HAProxy communicating over unix sockets. An alternative solution used two instances of HAProxy with an iptables flip.

But now, with the ability in HAProxy 1.8 to pass listening sockets from the old process to the new one, along with Linux kernel 3.9 support for SO_REUSEPORT, we finally have a solution that doesn’t feel like an ingenious hack of the Linux kernel and networking stack.
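
The enabling piece is small (paths reflect a typical Ubuntu layout and may differ on your system): expose the listener file descriptors on the stats socket, and a reload lets the replacement workers take over the already-bound sockets:

# /etc/haproxy/haproxy.cfg, in the global section
#   stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners

# with HAProxy 1.8 in master-worker mode, a systemd reload signals the master,
# and the new workers retrieve the listening sockets instead of rebinding them
sudo systemctl reload haproxy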

Continue reading “HAProxy: Zero downtime reloads with HAProxy 1.8 on Ubuntu 16.04 with Systemd”