Ubuntu: Extending a virtualized disk when using LVM

It is common for a virtualized Guest OS base image to have a generic minimal storage capacity.  But this capacity can easily be exceeded by production scenarios, performance testing, logging, or even the general cruft of running a machine 24×7.

In a previous post, I described extending a virtualized disk when using classic partitions.  In this post, I will perform the same task on an LVM-enabled system.  We will use console-level tools so that the procedure can be done from a remote terminal or by automation.
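As a preview of the approach, a typical sequence looks like the sketch below once the underlying partition has been grown (device and volume names such as /dev/sda1 and ubuntu-vg are examples; check yours with pvs and lvs):

pvresize /dev/sda1                          # tell LVM the physical volume has grown
lvextend -l +100%FREE /dev/ubuntu-vg/root   # grow the logical volume into the new free space
resize2fs /dev/ubuntu-vg/root               # grow the ext4 filesystem to match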

Continue reading “Ubuntu: Extending a virtualized disk when using LVM”

Ubuntu: Creating a Samba/CIFS share to quickly share files with Windows

We live in a multi-platform world, and the ability to easily share folders of content between users in the same protected network is a function made very convenient in the Windows world with CIFS shares (e.g. \\mydesktop\sharedfolder).

Luckily for Ubuntu users, it is pretty easy to set up CIFS shares to offer that same interoperability with Windows hosts on your network.  Start by installing the Samba components.

apt-get install samba -y
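As a preview, a minimal share definition can then be appended to /etc/samba/smb.conf like the sketch below (the share name and path are examples):

cat >> /etc/samba/smb.conf <<'EOF'

[sharedfolder]
   path = /srv/sharedfolder
   browseable = yes
   read only = no
   guest ok = yes
EOF

systemctl restart smbd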

Continue reading “Ubuntu: Creating a Samba/CIFS share to quickly share files with Windows”

vRealize Log Insight: Creating your own content pack for field extraction

Content Packs are plugins that allow you to create pre-packaged knowledge about specific event types.

For example, you can create a content pack that knows how to extract fields from one of your custom log sources.  Beyond extracted fields, you can also add saved queries, aggregations, alerts, dashboards, and visualizations.

Incoming Events from Agent

First, let’s examine our sample log on the agent side, in a file named /tmp/test.log.

2016-07-14 22:04:13.233 INFO  com.my.myTest      - [  150] 200
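Before defining the extraction, it can help to sanity-check a candidate regex against the line itself.  Here is a rough shell check (the regex is my own illustration, not Log Insight’s extraction syntax):

grep -oP '\[\s*\d+\]\s+\d+$' /tmp/test.log   # matches the trailing duration and status fields: [  150] 200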

Continue reading “vRealize Log Insight: Creating your own content pack for field extraction”

OpenWrt: Use setenv firmwareName for newer versions of Linksys WRT1900AC/S

When flashing an OpenWrt image to your newer versioned WRT1900AC/S, be aware that you should use ‘setenv firmwareName’ instead of ‘setenv firmware_name’.

The command will not fail, but the router will not understand that it should look for a non-default image name, and your tftp transfer will fail.

This change appears to have been made between WRT1900AC V1 and WRT1900AC V2.  So, for the latest versions such as WRT1900ACS, be sure to use ‘setenv firmwareName’.
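For example, from the router’s U-Boot serial console (the image filename below is only a placeholder for your actual OpenWrt factory image):

setenv firmwareName openwrt-wrt1900acs-factory.img
printenv firmwareName    # confirm the variable took before starting the tftp transfer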

Ubuntu: Serial level access to your Linksys WRT1X00AC/S

Whether you are updating the official Linksys router firmware or taking it a step further and installing open-source firmware like OpenWrt, serial-level access to your Linksys router is the most dependable way of guaranteeing a connection.

And if you have flashed the firmware via the web admin interface and cannot get web access back after a reboot, then you have no choice.  You have to be able to plug directly into the router’s serial interface and troubleshoot.
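As a sketch of what that looks like from an Ubuntu host, assuming a USB-to-TTL adapter that shows up as /dev/ttyUSB0 and the common 115200 baud rate:

apt-get install screen -y
screen /dev/ttyUSB0 115200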

Continue reading “Ubuntu: Serial level access to your Linksys WRT1X00AC/S”

Ubuntu: Extending a virtualized disk using fdisk when not using LVM

It is common for a virtualized Guest OS base image to have a generic minimal storage capacity.  But this capacity can easily be exceeded by production scenarios, performance testing, logging, or even the general cruft of running a machine 24×7.

For this reason, extending a virtualized disk can be extremely helpful.  Here is a walkthrough for extending a disk using fdisk on an Ubuntu system that is using classic partitions.  For performing this operation with LVM enabled, see my post here.

This type of change is typically made with a live CD to ensure exclusive disk access, using the gparted GUI for convenience.  But we will use fdisk here so that the procedure can be done from a remote terminal or by automation.
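The rough shape of the procedure, assuming the disk is /dev/sda and the partition to grow is /dev/sda1 (back up first, and note that the recreated partition entry must reuse the same starting sector; a reboot may be needed if the disk is in use):

fdisk /dev/sda       # delete the partition entry (d), recreate it larger at the same start (n), write (w)
partprobe /dev/sda   # ask the kernel to re-read the partition table
resize2fs /dev/sda1  # grow the ext4 filesystem into the enlarged partition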

Continue reading “Ubuntu: Extending a virtualized disk using fdisk when not using LVM”

Logstash: Using metrics to debug the filtering process

When building your Logstash filter, you would often like to validate your assumptions on a large sampling of input events without sending all the output to ElasticSearch.

Using Logstash metrics and conditionals, we can easily show:

  • How many input events were processed successfully
  • How many input events had errors
  • An error file containing each event that failed processing

This technique gives you the ability to track your success rate across a large input set, and then do a postmortem review of each event that failed.

I’ll walk you through a Logstash conf file that illustrates this concept.
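Once such a conf file is in place, the shell workflow is simple (file names here are hypothetical, and this assumes the conf reads events from stdin and writes failed events to /tmp/grok_failures.log):

bin/logstash -f metrics-test.conf < sample-events.log   # run the sample set through the filter
wc -l /tmp/grok_failures.log                            # count the failures, then review each one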

Continue reading “Logstash: Using metrics to debug the filtering process”

Ubuntu: Using a swap file instead of swap partition for virtualized server VMs

Before virtualization, there was a stronger argument for using a swap partition instead of a swap file for servers.  A fragmented swap file could lead to performance issues that a statically sized and placed partition did not have to consider.

But once virtualization comes into play, unless you go to great lengths to segment your storage pools, that swap partition is guaranteed to be neither statically sized nor statically placed on a physical platter.  And at that point, you should consider using a swap file, which provides more flexibility in sizing and capacity planning.

Here are instructions for adding a 16 GB swap file to Ubuntu:
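In outline, it looks like this (a sketch; use dd instead of fallocate on filesystems that do not support preallocation):

fallocate -l 16G /swapfile    # preallocate the 16 GB file
chmod 600 /swapfile           # restrict permissions, as required by swapon
mkswap /swapfile              # format it as swap space
swapon /swapfile              # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # make it persistent across reboots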

Continue reading “Ubuntu: Using a swap file instead of swap partition for virtualized server VMs”

Ubuntu: Using pdftk to stitch together two-sided PDF

There are many consumer-grade printers that provide the ability to scan a document to PDF.  But unless you have a high-end model, the printer may only be capable of scanning one side at a time, which means you end up with a “front.pdf” and “back.pdf”.

If you have a Linux desktop or laptop, luckily the solution is as simple as calling ‘pdftk’.
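For example, if back.pdf was scanned in reverse page order (common when you flip the whole stack over), pdftk’s shuffle operation interleaves the two files (drop the ‘end-1’ range if your back pages are already in forward order):

pdftk A=front.pdf B=back.pdf shuffle A Bend-1 output combined.pdf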

Continue reading “Ubuntu: Using pdftk to stitch together two-sided PDF”

Logstash: Testing Logstash grok patterns online

In my previous posts, I have shown how to test grok patterns locally using Ruby on Linux and Windows.  This works well when your VM does not have full internet access, only has console access, or for any other reason you want to test locally.

If you have access to a graphical web browser and the log file, there is a nice online grok constructor here and here.  By simply entering a sampling of the log lines and a grok pattern, you can verify that all the lines are parsed correctly.

Here is a small example to start you off:
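For instance, given a log line like the one below, the grok pattern that follows it (the field names are my own choices) captures the timestamp, level, logger, and remaining message:

2016-07-14 22:04:13.233 INFO  com.my.myTest      - [  150] 200

%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}%{SPACE}%{JAVACLASS:logger}%{SPACE}- %{GREEDYDATA:message}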

Continue reading “Logstash: Testing Logstash grok patterns online”

Logstash: Testing Logstash grok patterns locally on Windows

If the logs you are shipping to Logstash are from a Windows OS, it makes it even more difficult to quickly troubleshoot a grok pattern being sent to the Logstash service.

It can be beneficial to quickly validate your grok patterns directly on the Windows host.  Here is an easy way to test a log against a grok pattern:

Continue reading “Logstash: Testing Logstash grok patterns locally on Windows”

Documentum: JMS Access Logs to Analyze Custom Method Load

A vital piece of information that often goes overlooked is the load created by standard and custom methods run on the Java Method Server.  In some applications (such as D2), the JMS is used extensively for application functionality and this can have performance implications for your end users.

You can capture this information by enabling the JMS access log, which is not enabled by default.

Continue reading “Documentum: JMS Access Logs to Analyze Custom Method Load”

Documentum: Separating dfc.properties from your WAR

In the world of microservices and containers, it is often desirable to keep settings such as those found in dfc.properties outside of the JAR or WAR so that the deployment binary is the same no matter which environment it is deployed into.

The settings in dfc.properties can be externalized by specifying the location of dfc.properties in a JVM system property such as:

-Ddfc.properties.file=/tmp/dfc.properties
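For example, when launching a standalone DFC application (the jar name here is hypothetical):

java -Ddfc.properties.file=/tmp/dfc.properties -jar my-dfc-app.jar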

Sending SMTP Mail from Windows Using PowerShell

When working from the Windows command line, you can do a quick test to validate your SMTP connectivity using PowerShell:

c:\> Powershell -executionpolicy bypass

PS c:\> Send-MailMessage -to <TO> -from <FROM> -subject "testing123" -body "this is a test" -smtpserver <SMTPServer> -port 25

And if the mail server is accessed over TLS/SSL with SMTP authentication enabled:

PS c:\> Send-MailMessage -to <TO> -from <FROM> -subject "testing456" -body "this is a secure test" -smtpserver <SMTPServer> -port 587 -UseSsl -Credential (Get-Credential)

This is easier than dropping down to telnet, which is typically not installed on a modern Windows host.

Continue reading “Sending SMTP Mail from Windows Using PowerShell”

Documentum: Ignoring Referrals from the LDAP Synch Job

The most common way of integrating your existing Identity Management system with Documentum is to offer SSO (Single Sign-On) via the LDAP Synchronization job.

This requires that you set a Base DN for Documentum to search through, but it is not uncommon when dealing with real-world LDAP servers to have LDAP referrals in that search space.  Referral chasing is transparent, but it can cause performance issues, and even cause the job to time out if the forwarded DNS name is not resolvable from the Content Server host.

Continue reading “Documentum: Ignoring Referrals from the LDAP Synch Job”

EMC OnDemand: Federated Identity Management and Silent SSO

Identity Management for On-Premise Applications

Our industry today has some very proven technologies for providing a single set of login credentials to applications installed on-premise.  Most commonly, companies use a central Identity Management system (e.g. Microsoft Active Directory/Oracle Internet Directory/IBM Tivoli), and these systems implement an LDAP interface that 3rd party applications can call to validate user credentials.

This allows end users to log in to their internal HR portal, SharePoint site, or local Documentum Webtop with the same credentials they used to gain entrance into their Windows Desktop, and is termed SSO (Single Sign-On).  This has dramatically improved the end user experience, as well as improved the ability of IT to manage the risk and policies surrounding identity management.

Continue reading “EMC OnDemand: Federated Identity Management and Silent SSO”

EMC OnDemand: Best Practices for Custom Methods

The concept of custom methods which run directly on the Java Method Server has proven an extremely useful extension point for Documentum developers and solutions architects.  Whether used in a workflow activity to integrate with an enterprise message queue or as an action for Webtop users who need temporarily escalated privileges to apply legal retention, custom Java methods have become a key customization in most customer environments. Features include:

  • Lightweight invocation of methods as compared to dmbasic and external Java methods, which require launching a separate process
  • DFC operations execute on the same host as the Content Server which minimizes the effects of network latency and throughput
  • Can be configured to run as the repository owner which allows them elevated privileges to content when necessary
  • Provide the logic for workflow auto-activities, able to utilize any Java library including the DFC
  • Provide the logic for custom job/methods, again able to utilize the full power of Java and its libraries

Continue reading “EMC OnDemand: Best Practices for Custom Methods”

EMC OnDemand: Enabling Distributed Content Features and BOCS

Content delivery is one of the primary use cases for a Content Management system.  When users are spread across six different continents, you must have an implementation that ensures timely access for all users – not just those in the local network.  A typical scenario involves the database and primary Content Server deployed in the main North American or European datacenter with remote user groups scattered throughout the world.  These remote offices often have limited network throughput, which makes it even more challenging.

Enter Branch Office Caching Services

Documentum has dealt with this scenario since its inception and has a myriad of options for streamlining delivery to users in geographically distributed locations or different departments, among them: remote content servers with distributed storage areas, federations with replication, and Branch Office Caching Services (BOCS).  When we, as OnDemand Architects, looked at our customer needs and use cases, it became apparent that BOCS would be instrumental in providing remote users the experience they expected – which essentially boils down to application and content access on par with a local deployment.

Working with our customers in the real world, we have seen that web application access for remote users (whether via Webtop, D2, or xCP 2.0) is not significantly impaired by the incremental increase in latency to return HTML/JS/CSS.  The primary factor in application response and users’ perception of performance is the time it takes to transfer content during import, export, and checkin/checkout operations.

Continue reading “EMC OnDemand: Enabling Distributed Content Features and BOCS”

EMC OnDemand: OnDemand versus Amazon EC2

As you can imagine, potential customers have a lot of very legitimate questions when considering the move to EMC OnDemand.  For both new customers as well as those who are migrating their existing content into the EMC secure private cloud, one of the questions we hear a lot is, “Why would I choose EMC OnDemand instead of Amazon EC2?”.

I love this question.  It gives us a chance to talk about all the EMC OnDemand value-add without the appearance of grandstanding.  And in the end, it is clear to everyone that this is an apples-to-oranges comparison, but the explanation allows us to highlight some key points that resonate very deeply with an EMC customer evaluating cloud offerings.

Continue reading “EMC OnDemand: OnDemand versus Amazon EC2”

Documentum: Enterprise use of the DFS Data Model

The Documentum Foundation Services (DFS) introduced developers to the ‘DFS Data Model’, a rich object model that is capable of representing complex repository objects and relationships during interactions with content services.  For those with a DFC programming background, it can be a challenge to shift into the DFS paradigm, which focuses on service-oriented calls and relies on the data model to fully describe the requested transformations.

Based on my contact with customers through formal Service Requests as well as the EMC Support Forums, I see that many architects, when presented with this unfamiliar landscape, instantly assume that the best course of action is to design a custom model to shield other developers from the perceived complexity of the DFS data model.  Although well intentioned, I believe this initial reaction to change can have serious implications that are not often considered or understood at the time of implementation.

While abstracting the construction of the DFS data model carries a great deal of value, replacing the DFS data model with a custom model should be done only with deliberate purpose and awareness.  I will use this article to explore the motivations behind the development of these “simplified” models, their ramifications in a long-term SOA strategy, and how you can deliver convenience without making integration unnecessarily difficult or hindering the building-block nature of SOA.

Continue reading “Documentum: Enterprise use of the DFS Data Model”