A vital piece of information that often goes overlooked is the load created by standard and custom methods run on the Java Method Server (JMS). In some applications (such as D2), the JMS is used extensively for application functionality, and this can have performance implications for your end users.
You can capture this information by enabling the JMS access log, which is not enabled by default. Continue reading “Documentum: JMS Access Logs to Analyze Custom Method Load”
In the world of microservices and containers, it is often desirable to keep settings such as those found in dfc.properties outside of the jar or war so that the deployment binary is the same no matter which environment it is deployed into.
The settings in dfc.properties can be externalized by specifying the location of dfc.properties in a JVM system property such as:
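As a minimal sketch: `dfc.properties.file` is the JVM system property DFC checks for an external configuration file (verify the property name against your DFC version's documentation), and the path below is a placeholder for wherever your environment mounts its configuration.

```java
public class DfcBootstrap {
    public static void main(String[] args) {
        // Normally passed on the JVM command line, e.g.:
        //   java -Ddfc.properties.file=/opt/dctm/config/dfc.properties -jar app.jar
        // Setting it programmatically before any DFC class loads has the same
        // effect, which keeps the deployment binary identical per environment.
        System.setProperty("dfc.properties.file", "/opt/dctm/config/dfc.properties");
        System.out.println(System.getProperty("dfc.properties.file"));
    }
}
```

With this in place, the same WAR can be promoted from development to production and pick up each environment's dfc.properties from outside the archive.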
Troubleshooting LDAPSync issues is often much easier at the command line, where you can invoke the job immediately without having to continually refresh DA and wait for the job to be executed. Continue reading “Documentum: LDAPSync from the Command Line”
The most common way of integrating your existing Identity Management system with Documentum is to offer SSO (Single Sign-On) via the LDAP Synchronization job.
This requires that you set a Base DN for Documentum to search through, but it is not uncommon when dealing with real-world LDAP servers to have LDAP referrals in that search space. This is transparent, but it can cause performance issues, and can even cause the job to time out if the DNS name in the referral is not resolvable from the Content Server host.
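The behavior in question maps to a standard JNDI environment setting. A minimal sketch using only the JDK's JNDI API follows (the host name and port are placeholders; how the LDAPSync job itself exposes this setting is a separate question):

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapReferralEnv {
    // Builds a JNDI environment that ignores referral entries instead of
    // chasing them across servers.
    static Hashtable<String, String> ldapEnv(String providerUrl) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, providerUrl);
        // "ignore" skips referrals entirely; "follow" forces a DNS lookup and
        // a connection to the referred host, which is exactly what hangs when
        // that name is unresolvable from the Content Server.
        env.put(Context.REFERRAL, "ignore");
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env = ldapEnv("ldap://ldap.example.com:389");
        // new javax.naming.directory.InitialDirContext(env) would open the
        // connection with these settings applied.
        System.out.println(env.get(Context.REFERRAL));
    }
}
```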
Continue reading “Documentum: Ignoring Referrals from the LDAP Synch Job”
Content delivery is one of the primary use cases for a Content Management system. When users are spread across six different continents, you must have an implementation that ensures timely access for all users – not just those on the local network. A typical scenario involves the database and primary Content Server deployed in the main North American or European datacenter with remote user groups scattered throughout the world. These remote offices often have limited network throughput, which makes it even more challenging.
Enter Branch Office Caching Services
Documentum has dealt with this scenario since its inception and has a myriad of options for streamlining delivery to users in geographically distributed locations or different departments, among them: remote content servers with distributed storage areas, federations with replication, and Branch Office Caching Services (BOCS). When we, as OnDemand Architects, looked at our customer needs and use cases, it became apparent that BOCS would be instrumental in providing remote users the experience they expected – which essentially boils down to application and content access on par with a local deployment.
Working with our customers in the real world, we have seen that web application access for remote users (whether via Webtop, D2, or xCP 2.0) is not significantly impaired by the incremental increase in latency to return HTML/JS/CSS. The primary factor in application response and users’ perception of performance is the time it takes to transfer content during import, export, and checkin/checkout operations.
Continue reading “EMC OnDemand: Enabling Distributed Content Features and BOCS”
The Documentum Foundation Services (DFS) introduced developers to the ‘DFS Data Model’, a rich object model that is capable of representing complex repository objects and relationships during interactions with content services. For those with a DFC programming background, it can be a challenge to shift into the DFS paradigm which focuses on service oriented calls and relies on the data model to fully describe the requested transformations.
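To make the discussion concrete, here is a sketch of the DFS data model in use – describing a new dm_document for the Object Service to create. The class names come from the DFS SDK (`com.emc.documentum.fs.datamodel.*`) and require the DFS client jars to compile; the repository name and property values are placeholders.

```java
import com.emc.documentum.fs.datamodel.core.DataObject;
import com.emc.documentum.fs.datamodel.core.DataPackage;
import com.emc.documentum.fs.datamodel.core.ObjectIdentity;
import com.emc.documentum.fs.datamodel.core.properties.PropertySet;

public class DfsModelExample {
    public static DataPackage newDocument() {
        // An ObjectIdentity holding only a repository name describes an object
        // that does not yet exist; the service will create it.
        ObjectIdentity identity = new ObjectIdentity("my_repository");
        DataObject dataObject = new DataObject(identity, "dm_document");

        PropertySet properties = new PropertySet();
        properties.set("object_name", "example.txt");
        properties.set("title", "DFS data model example");
        dataObject.setProperties(properties);

        // A DataPackage is the unit passed to service operations such as
        // IObjectService.create().
        return new DataPackage(dataObject);
    }
}
```

Note that the caller never issues imperative fetch/set/save calls as in DFC; the DataObject fully describes the intended state, and the service interprets it.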
Based on my contact with customers through formal Service Requests as well as the EMC Support Forums, I see that many architects, when presented with this unfamiliar landscape, instantly assume that the best course of action is to design a custom model to shield other developers from the perceived complexity of the DFS data model. Although well intentioned, I believe this initial reaction to change can have serious implications that are not often considered or understood at the time of implementation.
While abstracting the construction of the DFS data model carries a great deal of value, replacing the DFS data model with a custom model should be done only with deliberate purpose and awareness. I will use this article to explore the motivations behind the development of these “simplified” models, their ramifications in a long-term SOA strategy, and how you can deliver convenience without making integration unnecessarily difficult or hindering the building-block nature of SOA.
Continue reading “Documentum: Enterprise use of the DFS Data Model”