As part of normal long-term operations, kernel images accumulate on your system and consume disk space. The problem is even more pronounced if /boot is mounted on its own, smaller partition.
With Ubuntu 16.04, ‘apt autoremove --purge’ and configuring unattended-upgrades can ensure that old kernel images are cleaned up automatically. But if you are using Ubuntu 14.04, or need to purge manually, the instructions below will walk you through the process.
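On 16.04 the cleanup can be a one-liner, optionally made automatic. The snippet below is a sketch, not a definitive configuration: the unattended-upgrades option shown is one that exists in the stock /etc/apt/apt.conf.d/50unattended-upgrades file, where it ships commented out.

```shell
# Remove kernel images/headers that are no longer needed, along
# with their configuration files
sudo apt autoremove --purge

# To make unattended-upgrades clean up unused dependencies (old
# kernels included), enable this line in
# /etc/apt/apt.conf.d/50unattended-upgrades:
#
#   Unattended-Upgrade::Remove-Unused-Dependencies "true";
```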
Before removing this unnecessary baggage, the first step is to check which kernel version is currently running and the installation state of the kernel packages.
> uname -r
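The installation state is typically checked with ‘dpkg -l’ against the linux-image packages. As a minimal sketch of how purge candidates are identified, the block below simulates that output with illustrative version strings (on a real system, release would come from ‘uname -r’ and installed from ‘dpkg -l’):

```shell
# Running kernel; illustrative value, normally: release=$(uname -r)
release="4.4.0-131-generic"

# Simulated package list, normally from:
#   dpkg -l 'linux-image-*' | awk '/^ii/ {print $2}'
installed="linux-image-4.4.0-124-generic
linux-image-4.4.0-128-generic
linux-image-4.4.0-131-generic"

# Everything except the running kernel is a candidate for removal
echo "$installed" | grep -v "$release"
```

The key safety rule is visible in the filter: never purge the package matching the kernel you are currently booted into.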
Continue reading “Ubuntu: Removing unused kernel images and headers”
In production data centers, it is not uncommon for security policies to limit public internet access. So while running ‘apt-get’ or adding a repository to sources.list is easy in your development lab, you need an alternative installation strategy: a process that looks the same across both development and production.
For some, building containers or images will satisfy this requirement. The container/image can be built once in development, and transferred as an immutable entity to production.
But for those who use automated configuration management such as Salt/Chef/Ansible/Puppet to layer components on top of a base image inside a restricted environment, there is a need to get binary packages onto these guest OSes without requiring public internet access.
There are several possible approaches: an offline repository, or a tool such as Synaptic, Keryx, or apt-mirror. In this post, I’ll use apt-get on an internet-connected source machine to download the packages needed for Apache2, then run dpkg on the non-connected target machine to install each required .deb package and bring up a running instance of Apache2.
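As a sketch of that workflow (directory names are illustrative, and the exact dependency set will vary by release), the connected machine downloads Apache2 and its dependencies into apt’s package cache, which is then copied to the target:

```shell
## On the internet-connected source machine:

# Empty the cache first so it will contain only the packages we need
sudo apt-get clean

# Download apache2 and all of its dependencies without installing;
# the .deb files land in /var/cache/apt/archives/
sudo apt-get --download-only install apache2

# Collect the downloaded packages for transfer (USB, scp, etc.)
mkdir ~/apache2-debs
cp /var/cache/apt/archives/*.deb ~/apache2-debs/

## On the non-connected target machine, after copying the files over:

# Install every .deb; with all dependencies present in the same
# directory, dpkg can satisfy them without network access
cd ~/apache2-debs
sudo dpkg -i *.deb
```

Note that ‘apt-get clean’ before the download is what keeps the cache free of unrelated packages, so the copy step picks up exactly the Apache2 dependency closure.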
Continue reading “Ubuntu: Installing Packages without Public Internet Access”