Organizations in the business of delivering software products can often benefit from developing an application platform. An application platform has the potential to enable significant reuse of business logic across multiple products, to provide an innovation sandbox for both internal and external developer communities, and to enable the rapid addition of new features into existing products. However, without a clear definition of the business goals, the right architecture strategy and an effective organizational structure, efforts to create an application platform may result in little more than a technology transition, and can negatively impact software performance or diminish interoperability.
The software industry does not have a common concept of an application platform, so there is no consensus interpretation on which to rely, and application platforms are often confused with other reuse concepts, such as software frameworks and toolkits. There are a number of software “infrastructure” companies that market their products as “enterprise application platforms”, but when an infrastructure company creates a platform, they are typically providing a combination of frameworks, toolkits and run-time environments — Oracle and Red Hat are examples. However, when an application company, such as Facebook or Amazon, creates an application platform, they are usually concerned with enabling reuse of common business logic across their application portfolio, such as Facebook’s social networking applications and Amazon’s shopping applications.
This article addresses the platform concerns that are relevant to an application company. The topics discussed below are not intended to provide a definitive recipe for creating an application platform, but rather to articulate common challenges that organizations are likely to encounter, and architecture strategies and organizational tradeoffs they may choose to consider.
The sections below outline technical efforts that a platform “organization” may need to undertake to achieve success. For the purposes of this article, a platform organization may be anything from an entire company, wishing to develop an enterprise-wide platform, down to a single domain team, seeking to improve their productivity. The strategies discussed below must be tailored for the business goals and scope of each organization.
Application Platform Goals
“You’ve got to be very careful if you don’t know where you’re going, because you might not get there.” —Yogi Berra
There isn’t a single motivation for creating an application platform, so different architecture strategies may have varying appeal, depending on the business goals. Two common motivations for an application platform are described below, but even within these two categories, different value may be placed on different objectives. Before making significant architecture decisions and investments, organizations would be well served by spending time characterizing their interests in the application platform concept, and the organizational tradeoffs that may be necessary to achieve their goals.
Building an External Developer Community
Some businesses, like Twitter, Facebook and Amazon, are able to bring additional users to their applications by creating a platform for external developers, which is typically utilized through an externally facing API. These businesses benefit by enabling external developers to find innovative ways to bring their products to new user communities. The external API may be a subset of the internally facing API, but for reasons such as performance or security, it may be entirely separate.
For this purpose, an application platform is a means to expose business logic in ways that enable unique composition models to promote innovation. Businesses that are primarily interested in developing a platform to support external developers will likely place a high value on architecture strategies that enable interoperability with disparate client programming languages, platforms and technologies. There will also likely be requirements to prevent abuse of the external API through security, rate limiting and administrative control mechanisms.
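Rate limiting is one of the abuse-prevention mechanisms mentioned above. The sketch below is a minimal token-bucket limiter in plain Java, not any particular vendor's API; the class name and numbers are illustrative, and a production limiter would also need thread safety and a bucket per API key.

```java
// Minimal token-bucket rate limiter sketch. Single-threaded for clarity;
// a real external API gateway would keep one bucket per client and
// synchronize access.
class TokenBucket {
    private final int capacity;         // maximum burst size
    private final double refillPerSec;  // sustained request rate
    private double tokens;
    private long lastNanos;

    TokenBucket(int capacity, double refillPerSec, long nowNanos) {
        this.capacity = capacity;
        this.refillPerSec = refillPerSec;
        this.tokens = capacity;
        this.lastNanos = nowNanos;
    }

    // Returns true if the request is allowed under the current budget.
    boolean tryAcquire(long nowNanos) {
        double elapsedSec = (nowNanos - lastNanos) / 1e9;
        tokens = Math.min(capacity, tokens + elapsedSec * refillPerSec);
        lastNanos = nowNanos;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(2, 1.0, 0L);
        System.out.println(bucket.tryAcquire(0L));             // true  (burst)
        System.out.println(bucket.tryAcquire(0L));             // true  (burst)
        System.out.println(bucket.tryAcquire(0L));             // false (bucket empty)
        System.out.println(bucket.tryAcquire(1_000_000_000L)); // true  (refilled after 1s)
    }
}
```

The same structure generalizes to the administrative controls mentioned above: the platform decides admission centrally, so individual services never need their own throttling logic.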
Improving Internal Productivity
For any business that exists within a competitive marketplace, survival is inextricably linked to the ability to bring new products to market quickly, or to rapidly add new features into existing products. In keeping with lessons learned from The Mythical Man-Month, “men and months are not interchangeable,” so development schedules cannot be infinitely compressed by stacking more engineers into cubes, and even if that were possible, it wouldn’t be cost effective. Businesses also face tough choices about where to invest their limited resources to maximize business discriminators, so they cannot further constrain those resources by developing the same features in multiple products. Instead, businesses must build products faster, with fewer engineers, and with less redundancy in feature development.
For this purpose, an application platform is intended to expose business logic for internal reuse. When the objectives are internally focused, organizations may have more flexibility in selecting architecture strategies that enable more extensive reuse. Under the previous scenario, the organization owns the single business logic implementation and exposes access to external communities. Under this scenario, the reuse strategy could include re-hosting the common implementation itself into multiple applications using a standard component framework. Issues such as interoperability across languages and technologies may not be as much of a concern because of the ability to mitigate technology diversity within the organization, although flexibility in making technology decisions will still likely be valued.
When the business motivations are primarily concerned with productivity and time-to-market, organizations typically have additional technical objectives, such as developing repeatable design patterns and creating composable presentation widget libraries, but these are in addition to, or in support of, the primary objective of reusing business logic.
Although the architecture quality attributes may be valued differently, the common theme for both of these business motivations is to reuse business logic. The choice of architecture strategies is largely dependent upon whether the goal is to expose business logic for external developer communities, over which businesses have little control, or to facilitate reuse of business logic for internal developers, which may open up a broader set of options.
Scoping the Application Platform Strategy
“If you know neither the enemy nor yourself, you will succumb in every battle.” —Sun Tzu
The software industry is riddled with overloaded terms – just ask any “Architect” to define their role and the diversity of opinions will become obvious. Similarly, there are widely varying interpretations of the application platform concept, but any organization attempting to create an application platform should at least define what it means to them, and they should know how they measure its efficacy against their business goals. The business goals should be articulated in terms of the value to the business (e.g. 10% reduction in development days per feature), and the technical scope of the application platform should be defined so that the relationship to the business goals is clear. To ensure that application platform efforts remain focused, metrics and collection methods should be designed to track the progress of the platform strategies towards achieving the business goals.
The section above identified the reuse of business logic as a key objective of an application platform. The sections below describe the technical scope of several activities that are commonly associated with application platform strategies to support the reuse of business logic and to facilitate productivity improvements. There are many other technical issues that organizations must address to enable enterprise software development, such as security protocols, system management and logging. However, these topics are not unique to application platform efforts, so they are not addressed here. This is also not an article on the merits of different SOA methodologies (e.g. REST vs. SOAP) and industry standards, which are also subjects that organizations must address when they pursue a SOA strategy.
Defining the Architecture
Most organizations choose to pursue an application platform as a strategy for exposing and reusing business logic, which is based on the premise that many of a business’ products can be built by reusing the implementation of common application features (e.g. user registration, geo-filtering, map integration, etc). Figure 1 notionally illustrates this concept, where applications are integrated by reusing business logic deployed as services, which are logically exposed as part of the application platform. The figure includes (simplistic) depictions of internally-developed applications composed of presentation logic and reusable business logic, and 3rd party applications utilizing the exposed business logic, which are outside of the organization’s control.
An application platform is unlikely to be successful without a good understanding of the basic feature building blocks that constitute the business’ products, and an understanding of how features need to vary across products (commonly known as variation points). This is typically the purview of Product Managers (aka Systems Engineers), and is often a deficiency in many organizations because of the unique combination of technical and social skills and market awareness required to be effective in these roles. If Product Managers cannot identify significant commonality across products, and the variation points within the common Use Cases and User Stories, then software architects are unlikely to be able to design an effective application platform. It is also worth noting that, although understanding the platform requirements is an essential step, it is equally important that the information be effectively communicated to engineering teams. Organizations that want to benefit from an application platform must invest in a product management team that can not only distill the right business information, but also has the tools and skills to communicate that information to their architecture and engineering counterparts.
Once requirements are understood and architecture quality attributes (extensibility, security, availability, usability, etc.) are evaluated, architects can decompose the software architecture into its constituent elements – e.g. services, components, databases, middleware, etc. – and define the supporting deployment configuration. The importance of this process in achieving the application platform goals cannot be overstated. The decomposition of the architecture, along with the definition of the interfaces, establishes the model for reuse. Therefore it is essential that organizations creating an application platform invest in an architecture staff that has experience employing architecture strategies that enable reuse and that satisfy the business’ quality priorities. Architects must also have the ability to set expectations with their product management counterparts for the types of requirements needed to address extensibility aspects of the architecture design.
Software frameworks provide their own reuse model, but the reuse is of generic code, not business logic, which is inherently specialized to a business domain. In addition to providing for reuse of generic code, frameworks often provide a development and deployment model for building component-based business logic on top of the generic code provided by the framework. So, not only do frameworks improve productivity, within the business tier of a software architecture they also provide the foundation for component-level reuse. If an organization chooses to adopt a component-level reuse strategy, they will almost certainly need to adopt a common (3rd party) framework technology.
Even if the goal is not to reuse business logic at the component level, organizations creating an application platform would be well served by including an effort to “templatize” development of certain layers of the software architecture. For example, an organization may want to utilize a presentation framework to allow developers to reliably and repeatedly create standard web pages with the same look and feel and with consistent behaviors and performance. Similarly, there is significant productivity value in creating conventions for developing business logic (e.g. components and services) to increase the efficiency of software engineers and to raise the quality of the software products through standardization. Technology frameworks, tooling and established design conventions can all support these productivity and quality goals, so an organization may want to consider them to be part of an application platform strategy.
Business Logic Frameworks – Architecture strategies, like service orientation, require significant commitment to languages, standards and technologies to implement solutions for object marshaling, exposing endpoints, lifecycle management and cross-cutting concerns (e.g. security, logging). In addition to service orientation concerns, component-level reuse of business logic is only possible if organizations adopt a common component technology (e.g. Spring Beans, Enterprise Java Beans, .NET). Most component-based business logic frameworks now support extensions for Web Services standards and other service orientation implementation concerns, so the two architecture strategies can typically be addressed with a single framework technology if so desired. Once a framework technology is selected, many organizations will need to perform some in-house tailoring to integrate cross-cutting concerns (e.g. security, logging) to accommodate the business’ operational constraints.
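The in-house tailoring mentioned above often amounts to weaving cross-cutting concerns uniformly around business components. As a hedged sketch in plain Java (frameworks such as Spring accomplish this with AOP; the interface and class names below are invented for illustration), a dynamic proxy can add audit logging to any component without touching the business logic itself:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Hypothetical business component interface (illustrative name).
interface GeoFilterService {
    boolean isInRegion(double lat, double lon);
}

// Pure business logic, with no knowledge of logging or security.
class GeoFilterServiceImpl implements GeoFilterService {
    public boolean isInRegion(double lat, double lon) {
        return lat >= -90 && lat <= 90 && lon >= -180 && lon <= 180; // toy rule
    }
}

// In-house platform tailoring: wrap each component so a cross-cutting
// concern (here, audit logging) is applied the same way everywhere.
class AuditingPlatform {
    static final List<String> auditLog = new ArrayList<>();

    @SuppressWarnings("unchecked")
    static <T> T expose(Class<T> iface, T impl) {
        InvocationHandler handler = (proxy, method, args) -> {
            auditLog.add("invoke:" + method.getName());
            return method.invoke(impl, args);
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    public static void main(String[] args) {
        GeoFilterService svc = expose(GeoFilterService.class, new GeoFilterServiceImpl());
        System.out.println(svc.isInRegion(40.7, -74.0)); // true
        System.out.println(auditLog);                    // [invoke:isInRegion]
    }
}
```

Because the concern lives in one place, changing the business’ operational constraints (a new logging format, an added security check) does not require touching every component.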
Presentation Frameworks – Although not a business logic concern, organizations can enable teams to rapidly create user experience products by employing presentation technology frameworks (e.g. SEAM, Spring Web Flow), and by creating reusable presentation widget libraries built on top of those frameworks.
Different Use Cases will dictate different integration patterns between software architecture elements, such as asynchronous messaging (e.g. Publish-Subscribe, Point-to-Point) or synchronous RPC-type interactions. To eliminate reuse barriers, organizations adopting an application platform will likely want to identify specific solution templates for specific integration patterns as part of a platform execution strategy. For example, an organization might standardize around JMS as the messaging API, select a specific broker implementation that meets performance requirements, establish a broker deployment configuration that meets quality of service (QoS) requirements and create design conventions for dividing message traffic into queues and topics around QoS and performance constraints. In addition to providing a common integration substrate for reuse purposes, standardization efforts such as these provide an architecture and development template that all engineering teams can follow, which reduces development time and improves software quality across teams.
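To make the queue/topic distinction concrete, the sketch below models publish-subscribe delivery with a toy in-memory broker. This is not the JMS API; a real convention would standardize on JMS destinations and a specific broker product. In point-to-point (queue) delivery, each message would instead go to exactly one competing consumer.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy in-memory broker: a topic delivers each message to every subscriber
// (publish-subscribe), unlike a queue, which delivers each message to one
// competing consumer (point-to-point).
class TopicBroker {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    void publish(String topic, String message) {
        for (Consumer<String> handler : subscribers.getOrDefault(topic, List.of())) {
            handler.accept(message);
        }
    }

    public static void main(String[] args) {
        TopicBroker broker = new TopicBroker();
        List<String> appA = new ArrayList<>();
        List<String> appB = new ArrayList<>();
        broker.subscribe("orders.created", appA::add); // two independent consumers
        broker.subscribe("orders.created", appB::add);
        broker.publish("orders.created", "order-42");
        System.out.println(appA); // [order-42]
        System.out.println(appB); // [order-42]
    }
}
```

A design convention in the spirit of the paragraph above might be: use topics for events that many applications observe (both consumers receive "order-42" here), and queues for work items that exactly one service instance should process.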
In addition to selecting integration patterns and the supporting standards and technologies, most organizations choose to standardize some of the semantics and conventions of the service interfaces in their platform. As an example, consider the consequences if each of Twitter’s publicly exposed REST resources used different representations for user ids, screen names and status updates. Their platform API would probably be the subject of derision due to the additional burden placed on the developer community to reconcile the inconsistencies. The same would be true if their APIs randomly switched between REST with JSON payload formats and SOAP with XML formats. It is not in Twitter’s business interests to allow such diversity to emerge in the name of agility. Successful platform efforts will include some standardization of the interface object models and design conventions to reduce the burden placed on service consumers.
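As an illustration of a standardized interface object model (the field names and JSON convention below are invented for this sketch, not Twitter’s actual API), a platform might define one canonical user representation that every endpoint reuses:

```java
// Hypothetical canonical interface object: every platform endpoint exposes
// users with the same field names and types, rather than each team
// inventing its own representation.
final class UserRef {
    final long id;           // always a numeric id, never a string in one API and a long in another
    final String screenName; // always "screenName", never "handle" or "username"

    UserRef(long id, String screenName) {
        this.id = id;
        this.screenName = screenName;
    }

    // One JSON convention shared across all endpoints (hand-rolled here for
    // illustration; a real platform would standardize on one serializer).
    String toJson() {
        return String.format("{\"id\":%d,\"screenName\":\"%s\"}", id, screenName);
    }

    public static void main(String[] args) {
        System.out.println(new UserRef(7, "alice").toJson());
        // {"id":7,"screenName":"alice"}
    }
}
```

The value is not in the class itself but in the convention: every consumer can parse a user reference the same way, regardless of which service returned it.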
Organizations may choose to dictate integration technologies and prescriptive patterns across the board, or they may choose to allow teams autonomy within their own applications or domains and only mandate enterprise level (i.e. cross-domain or cross-technology) integration solutions. However, as more technical diversity arises within teams and products, some reuse opportunities may be lost because of the integration challenges. The significance of this risk will be unique to each organization.
To ensure that development teams are achieving reuse objectives, organizations may want to standardize an approach to orchestrating business processes. Separating the implementation of business process flows from business logic is necessary to preserve composability and to maximize reuse of business logic across multiple flows. When workflow orchestration is intertwined with business logic, specific Use Cases become hard coded into the business logic, making it difficult to compose and reuse services for new Use Cases. There is also value in keeping complex orchestration out of the presentation tier so that different application development teams can reuse common flows, which means that orchestration services are typically deployed at the top of the architecture’s business logic tier. These orchestration services are sometimes known as application services because application teams use them directly to integrate complex business processes into applications.
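The separation described above can be sketched as follows (the service names are illustrative): the Use-Case flow lives in an orchestration service, while the domain services carry no flow logic and therefore remain composable into other flows.

```java
import java.util.ArrayList;
import java.util.List;

// Domain services hold business logic only -- no knowledge of any flow.
interface InventoryService { boolean reserve(String sku); }
interface PaymentService  { boolean charge(String account, double amount); }

// Orchestration ("application") service: the checkout flow is encoded here,
// at the top of the business logic tier, so application teams can call it
// directly and the domain services stay reusable for other Use Cases.
class CheckoutOrchestrator {
    private final InventoryService inventory;
    private final PaymentService payments;
    final List<String> steps = new ArrayList<>(); // records the flow for illustration

    CheckoutOrchestrator(InventoryService inventory, PaymentService payments) {
        this.inventory = inventory;
        this.payments = payments;
    }

    boolean checkout(String sku, String account, double amount) {
        steps.add("reserve");
        if (!inventory.reserve(sku)) {
            return false; // the flow decision lives here, not in the domain services
        }
        steps.add("charge");
        return payments.charge(account, amount);
    }

    public static void main(String[] args) {
        CheckoutOrchestrator flow =
                new CheckoutOrchestrator(sku -> true, (account, amount) -> true);
        System.out.println(flow.checkout("sku-1", "acct-9", 9.99)); // true
        System.out.println(flow.steps); // [reserve, charge]
    }
}
```

A new Use Case (say, a gift-order flow that reserves inventory but defers payment) becomes a second orchestrator over the same domain services, rather than a change to them.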
Some organizations will want to carefully control the introduction of languages, standards, technologies and design conventions into the development and deployment environment to facilitate reuse as part of an application platform strategy, while others may prefer to promote creativity and innovation by letting a thousand flowers bloom. Every organization has to decide the level of oversight they will exert based on their objectives and values. However, it should be acknowledged that significant heterogeneity in an organization’s technology stacks can cause interoperability and compatibility challenges, making reuse difficult. It can also make it difficult to migrate engineering staff between projects, and it can create an unmanageable number of dependencies on the lifecycles of 3rd-party products and SDKs. If heterogeneous framework products need to coexist on a single deployment platform, each new version upgrade of each product can have consequences on the other products, which can limit an organization’s flexibility and their ability to rapidly evolve their systems. So, if left unmanaged, the proliferation of disparate technologies can have the opposite effect from what decentralized organizations intended. If decentralized technical decision-making is important to an organization, then the software architectures and deployment configurations must be designed to minimize dependencies and potential conflicts between products developed by autonomous teams.
Allowing small teams to move rapidly by decentralizing technical decision-making is an appealing organizational model. However, organizations often find that, rather than innovating, these autonomous teams spend considerable time solving the same technical challenges already addressed by other teams (or other companies), and they can be susceptible to a “not invented here” mentality, preferring to build rather than reuse. Enabling rapid product development often requires a more nuanced organizational solution, including some prescriptive technologies and design patterns, an effort to disseminate knowledge about reuse opportunities across engineering teams and a streamlined process to enable deviation for truly unique technical challenges. Most of all, it requires architects and engineering managers who understand the value of applying repeatable design patterns and common industry solutions as part of a broader platform strategy.
Organizational Strategies
“A camel is a horse designed by a committee.” —Unknown
As mentioned above, many software organizations choose to decentralize their technical decision-making process so that teams close to the product development activities have the freedom to innovate and move rapidly to make choices that are best suited for the circumstances on the ground. This decentralized organizational model can pose some challenges to developing a platform, which is inherently horizontal in that it cuts across multiple products. Some of the challenges follow below.
- When a decentralized organization attempts to create a horizontal solution, the technical decision-making process can degenerate into a consensus-driven democracy across product teams, which is a poor model for creating high quality, innovative software products on aggressive schedules.
- Vertical product teams can be prone to parochialism when they become responsible for building software that is intended to be reusable beyond the products for which they are responsible, because their incentives have historically been driven by product successes, not platform successes.
- The success of a platform is highly dependent upon the correct decomposition and allocation of requirements to the specific layers and software elements of the architecture. In some decentralized models, there may be no independent architect to evaluate requirements and business goals to decompose the software into its constituent parts. If all architects are decentralized to the product teams, then everyone in the organization already represents some particular organizational interests, so the architecture may become biased to reflect the org structure rather than business goals.
The significance of these challenges largely depends on the reach of the platform efforts. In some organizations, collections of related products could be readily aggregated under a few major domain-centric teams, allowing most or all of the platform efforts to be localized to each team. In effect, each team would be building its own platform supporting its own products, perhaps with a few services exposed to the broader enterprise. Under these circumstances, since most of the reuse objectives are targeted at the products managed by each team, there may be no need for a broader organizational solution.
In cases where there is a need to reconcile platform efforts across multiple decentralized teams, one possible approach is to divide teams differently, where some teams are focused on creating applications and other teams are focused on the “platform as an internal product.” Teams that are focused on building true end-user applications would create the user experience (i.e. presentation), create business logic that is unique to the application (i.e. not part of the platform) and integrate platform business logic (e.g. services, components, etc) into the application. Teams responsible for the platform would be disassociated from any specific application ownership and instead work to satisfy requirements that are applicable to multiple application teams, as well as employing architecture strategies to enable reuse and rapid extensibility. It is worth noting that, although this approach would likely produce better results for the platform, it does diminish the appeal of decentralized teams, which are intended to be self-contained with minimal external dependencies.
Regardless of the organizational strategy, it is important for businesses to be mindful of Conway’s Law, which is roughly interpreted to mean that a software product will reflect the organization that created it in the way the architecture is modularized and in the quality of the interfaces. With that in mind, if creating an application platform is a high priority, it is probably better to organize teams around the desired architecture structure rather than hope the desired architecture will emerge from the existing organizational structure.
Org structures are the subject of holy wars, and the pros and cons of platform organizational strategies are a topic worthy of a separate article, but a couple of conclusions are abundantly clear. First, any org structure that creates ambiguity about roles and responsibilities, or confusion about who has authority to make decisions, is a bad model. Second, every org structure must be tailored for the people that fill its ranks. It is tempting to copy an organizational model from a highly successful company, but there are differences in the talent pools available to different companies. Not every company can be staffed like Google. Companies like Google have access to a large pool of top talent, so it is difficult to distinguish the contribution of the organizational model from the benefits of “individual heroism” that come with an abundance of experienced, highly-qualified staff. When organizations have less access to top talent, they may achieve better results by centralizing some critical technical decisions and providing more top-down guidance to implementation teams, possibly at the expense of some agility. Achieving agility at the expense of quality is a self-defeating recipe for becoming overwhelmed by technical debt.
SOA Governance, an extension of traditional architecture and IT governance, is a frequent topic of discussion in the SOA literature. Design-time governance provides a mechanism to ensure consistency in the decomposition of the architecture into its constituent elements (services, components, databases, etc) and oversight of the design and evolution of contracts and interfaces. Run-time governance provides a mechanism for ensuring compliance with enterprise policies and SLAs (security, performance, etc) as new services are deployed into an operational environment, or as new service consumers are added.
Organizations working to enable the reuse of business logic across products and domains will have to institute some degree of architecture governance to ensure that architecture elements are designed and evolved to meet the needs of multiple product teams. This typically requires some prescriptive architecture decisions with respect to the topics discussed above (e.g. layering, control flow models, message exchange patterns, standards and technologies). Organizations that fail to govern the mapping of requirements and other architecture quality attributes to a coherent set of services often end up with a proliferation of fine-grained services, each providing only a parochial interpretation of a narrow set of requirements, which defeats the purpose of service orientation for broad reuse.
A prerequisite of architecture governance is an agreement on the common platform Use Cases and the way the implementation will be exposed to consumers for reuse. For example, if a business wants to develop a shopping cart service for reuse across multiple products, each product team must accept the common shopping cart workflow and the separation of features into architecture layers and elements. Although it sounds easy, this can be a bitter pill to swallow for teams wanting to retain full autonomy over the interpretation of their requirements.
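To make the shopping cart example concrete, the sketch below (class and hook names are invented) shows one way the common workflow can be shared while each product team varies only an agreed extension point, here a pricing policy:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Shared platform implementation of the cart workflow. Product-specific
// variation is confined to one agreed hook (the pricing policy), so every
// product team reuses the same add/total flow.
class SharedCart {
    private final Map<String, Double> items = new LinkedHashMap<>();
    private final UnaryOperator<Double> pricingPolicy; // the agreed variation point

    SharedCart(UnaryOperator<Double> pricingPolicy) {
        this.pricingPolicy = pricingPolicy;
    }

    void add(String sku, double price) {
        items.put(sku, price);
    }

    double total() {
        double sum = 0.0;
        for (double price : items.values()) {
            sum += price;
        }
        return pricingPolicy.apply(sum); // each product varies only this step
    }

    public static void main(String[] args) {
        SharedCart memberCart = new SharedCart(t -> t * 0.9); // one product's 10% discount
        memberCart.add("sku-a", 10.0);
        memberCart.add("sku-b", 20.0);
        System.out.println(memberCart.total()); // 27.0
    }
}
```

The governance cost is visible even in this toy: a team that wants to vary anything other than pricing must negotiate a change to the shared contract rather than fork the workflow.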
Inherent within the governance topic is the need for a discovery solution. Large, distributed, and potentially decentralized, organizations need a way to communicate a great deal of information about the software asset inventory and its lifecycle so that teams can discover which software assets are available for reuse, retrieve contract and interface specifications, and acquire the specific service endpoints for integration with the consumer’s application. Organizations whose primary motivation is internal reuse will have to decide how much effort they will invest in creating and maintaining enough documentation to enable completely self-service reuse, versus just enough public documentation to put internal application teams in touch with the internal service provider. The former approach is always the ideal, but the latter approach is often the default solution because of the burden of maintaining documentation.
Automated tools can be used to satisfy many of the policy compliance governance concerns, but for much of design governance, there is no substitute for some degree of architecture oversight. An organization building an application platform will have to make decisions about their governance structure with their eyes open about the relationship between governance and the application platform goals. If related applications are grouped together under a single domain or product team, then it may be sufficient to structure governance within each of these product groups rather than centralizing governance across all teams. This could strike a reasonable balance between reuse and decentralization, where most reuse is expected to be achieved within each decentralized domain team.
Application platforms come with an inherent dependency management risk, where some applications may be slower to market because the development schedule for a required platform service is held up for another, slower-moving application with poorly defined requirements. This dependency risk is significant, but it can be managed with incremental releases and careful governance to ensure backwards compatibility. Regardless of the organizational strategy used to develop the application platform, it should include specific governance mechanisms to address dependency management.
A cottage industry revolves around the notion of software process improvement through measurement. Without measurement, process efforts have a tendency to take on a life of their own, detached from any business value. The same can often be said about technology efforts. Service orientation initiatives frequently result in a new architecture with a great many services, but no new business benefits. To avoid this fate, the business goals must be clear and universally understood, and the enabling technical decisions and tradeoffs should be evaluated based on the degree to which they achieve the business goals.
There is considerable artistry involved in taking a high-level business goal, like reducing the time to market, and decomposing it into measurable objectives, such as increasing the number of SLOC written (or features produced) per development day and reducing the testing and deployment time. Once those objectives are defined, they must be measured, tracked over time and used to tailor the application platform approach. Measurements for objectives like engineering productivity are never perfect. There is always some variance across teams and projects due to differences in the technical scope of the work or the skills of the staff, so it is important that metrics be carefully collected and analyzed over a number of projects to avoid making decisions based on anomalies.
Platform Architecture Strategies
“When one has finished building one’s house, one suddenly realizes that in the process one has learned something that one really needed to know in the worst way – before one began.” —Friedrich Nietzsche
Modularity, encapsulation and decoupling are key architecture tactics for developing reusable, extensible software and increasing developer productivity. If business logic is appropriately modularized, application developers can compose it in unique ways to build new products or to enable the extension of existing products. If business logic is modularized too coarsely, then some business functions will be unreachable and incapable of being used or modified independently (failure to separate concerns). If business logic is modularized too finely, the programming model can become overly complex or performance may suffer because of the verbosity of the software interfaces.
Encapsulation is, in part, a mechanism for hiding complexity from consumers of reusable business logic, which is critical for improving developer productivity. Architects strive to hide the complexity of a module behind its public interface so that consumers are only required to understand how to use a software module, but they are not required to understand how it is implemented.
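A minimal sketch of this idea in Java (the interface and rate are hypothetical): consumers program against the public interface and never see how the result is computed, so the implementation can change without touching consuming code.

```java
// The module's public interface: all a consumer needs to understand.
interface TaxCalculator {
    double taxFor(double amount, String region);
}

// The implementation detail is hidden behind the interface; it could later
// switch to a rules engine or a remote lookup without any change to consumers.
class FlatRateTaxCalculator implements TaxCalculator {
    private static final double RATE = 0.07;  // internal detail, not exposed
    @Override
    public double taxFor(double amount, String region) {
        return amount * RATE;
    }
}
```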
Decoupling is a means of isolating changes in a system. For large systems, introducing reusable business logic can lead to a significant number of dependencies. If multiple applications depend on one reusable software module, any changes to that module can drive volatility into every consuming application. However, there are architecture strategies and design patterns that can help decouple application consumers from the business logic implementation, which, in many cases, can enable the reusable business logic to evolve at a different rate than the consuming application.
The sections below discuss some of the key aspects of modularity, encapsulation and decoupling with respect to the roles of service orientation and component architectures in creating a reuse strategy for an application platform.
Services and Components
Service orientation has received a lot of attention over the last few years, but its purpose is often misunderstood and the architecture concepts are frequently applied too broadly. When organizations discover service orientation, many embrace it as their single architecture strategy for reusable software, only to be disillusioned when goals are not achieved. Service orientation provides specific tactics for modularization, encapsulation and decoupling, and it can be an effective architecture strategy for achieving reuse of business logic, but it should not be viewed as a replacement for traditional component architecture strategies.
There are many differing opinions about the distinctions between services and components, such as coarse versus fine-grained interfaces. However, most of these characteristics do not do much to eliminate ambiguity, since, for example, there is nothing to preclude a coarse-grained component or a fine-grained service (other than good sense!). The most distinct differences have to do with the application developer’s perspective relative to components and services and how they are used at development-time and run-time.
Components are tangible units of code developed using a language-specific framework (e.g. Spring Beans, Enterprise Java Beans, .NET), and they are typically compiled and integrated into an application as a native binary library or file. Component frameworks provide a convention for structuring a component’s interface, and they provide standardized solutions for common enterprise requirements (e.g. persistence, security, management, etc). A consumer of a component typically must be aware of the component framework technology, and the consuming application is usually written in the same language as the component, so a component entails a development-time model. Services, by contrast, are system interface boundaries whose underlying implementation and deployment architecture are hidden from the consumer behind a contract and interface, so a service is strictly a run-time model: it exists for the consumer only as an endpoint invoked at run time, and its physical location is irrelevant. A developer has to care whether a component is implemented in Java or C# and has to understand the component’s deployment relationship to the application. Conversely, developers do not have to be concerned with a service’s implementation technology or its deployment relationship; a service is invoked through an interface over a network.
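The contrast can be sketched in a few lines of Java. The component is a class the consumer compiles against and invokes in-process; the service exists for the consumer only as an endpoint and a contract. All names and the endpoint URL here are hypothetical.

```java
// Development-time view: the component is a native dependency, compiled in.
class PricingComponent {
    double discount(double subtotal) { return subtotal * 0.10; }
}

class CheckoutApp {
    private final PricingComponent pricing = new PricingComponent(); // in-process call
    double total(double subtotal) { return subtotal - pricing.discount(subtotal); }
}

// Run-time view: the service is only an endpoint plus a contract; the consumer
// holds a URL, not a class. A real client would issue an HTTP request here --
// the implementation language and deployment behind the endpoint are invisible.
class PricingServiceClient {
    private final String endpoint = "https://pricing.example.com/discount"; // hypothetical
    String endpoint() { return endpoint; }
}
```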
Since services are invoked over a network, their interfaces must be designed to exchange “chunky” messages to avoid a “chatty” network interaction. Those chunky messages typically violate object-oriented best practices related to cohesion, which is a tradeoff made to mitigate performance penalties. The same tradeoff would not be made for a locally bound component, which would instead be designed with a cohesive object model exchanged across the intra-process interface boundary.
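The tradeoff can be illustrated with a hypothetical customer lookup. The first interface is cohesive and fine for a locally bound component; exposed as a service, each call would be a separate network round trip. The second returns one "chunky" message, trading cohesion for a single exchange.

```java
// Chatty interface: natural for a local component, costly as a remote service,
// since each accessor would become its own network round trip.
interface CustomerChatty {
    String name();
    String street();
    String city();
}

// Chunky service message: one request returns everything the consumer needs,
// trading object-oriented cohesion for a single round trip.
record CustomerSnapshot(String name, String street, String city) {}

interface CustomerService {
    CustomerSnapshot fetchCustomer(String id);  // one coarse-grained exchange
}
```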
The line between services and components can still be blurred, especially with remote component technologies (e.g. Enterprise Java Beans with RMI), which can be used to achieve some degree of decoupling between the component’s consumer and the component deployment architecture. In the sections below, various characteristics and use cases are explored to help distinguish the appropriate role of services and components in an application platform architecture strategy. Figure 2 illustrates two opposite ends of an architecture spectrum, where an application platform is implemented with a service orientation strategy (a), and implemented with a component architecture strategy (b). With the service-oriented platform approach, a single implementation of the business logic is exposed for reuse by applications, and services are deployed and scaled independently from any specific application. With the component-based approach, copies of components are reused within applications, but then entire applications are integrated and deployed together, and they are scaled through load balancing at the application level.
Service orientation can be an effective strategy for supporting a wide variety of different consumers using different languages and platforms. To eliminate platform and language dependencies, services are deployed independently from their consumers and invoked over a network using technology-agnostic protocols and formats. Web Services standards have formed around HTTP, XML and JSON to meet this interoperability need. Components, on the other hand, are language-specific and they are invoked through native language interfaces. Cross-language component interface technologies, like CORBA and JNI, have not been widely adopted because of a variety of limitations.
There are limits to what interoperability can be achieved with Web Services technologies, which are most suitable for synchronous request-response or request-acknowledge integration patterns. Although there are Web Services standards efforts to accommodate event-driven patterns (WS-Eventing, WS-Notification), the standards landscape is fractured in this area, and none are widely adopted. Other technologies, such as the Java Messaging Service (JMS), are often necessary to support asynchronous, event-driven requirements (inside an enterprise) and the corresponding quality of service constraints. However, using these technologies often comes at the expense of interoperability and the additional complexity of bridging disparate, language-specific solutions.
One of the most significant differences between components and services is the deployment model. In most scenarios, consumers include a binary copy of a reusable component in their own application deployment, which tightly couples the consumer to a specific version of the component implementation. If the component provider changes the implementation, but does not change the interface (e.g. bug fix), the component must still be redeployed to consumers, who must repackage and redeploy their own applications. Conversely, service consumers have no dependency on the service deployment, which is a characteristic often referred to as location transparency. If a service implementation changes without affecting the interface, consumers should be unaware of the change, which means it is incumbent upon the architect to make design choices that minimize the changes that must be propagated through the service interface to the consumers. Careful separation of a service’s interface from its implementation is a key tenet of service oriented design.
There are component architectures that support remote binding, such as Enterprise Java Beans, which can provide location transparency. However, such component architectures require a significant, enterprise-wide commitment to a set of framework technologies (e.g. JEE App Server, JNDI) and a programming model. Historically, remote binding in component architectures has not been very successful, in part because remote invocation requires explicit thinking about the component’s interface granularity and handling of the inherent unreliability of distributed systems. Since component designers are defining interfaces in a native language, they generally create their components with local binding in mind, which means they create chatty interfaces that don’t perform well when invoked remotely. Most architects choose to scale component-based architectures by bundling complete applications together with locally bound components (Figure 2b), and then they load balance by replicating instances of the applications across multiple hardware nodes, as opposed to the alternative of deploying remote pools of components and integrating applications through remote component binding.
Because of the decoupled deployment model, service orientation can be an effective architecture strategy when a business places a high value on change isolation. This decoupling enables the evolution of implementation technologies and lifecycle strategies in different parts of a system without driving volatility into the consumers. However, architects often fail to achieve this objective because they make service design choices that do not emphasize decoupling, such as decisions that tie a service’s internal domain model to its external interface, which couples the clients to the internal design of the services and unnecessarily propagates changes that should be isolated. These types of decoupling abstractions do have an impact on performance (e.g. object transformations), and although often negligible if implemented carefully, an organization that places more value on extreme performance optimization (beware of the Cult of Performance!) over change isolation should probably focus on modularity and reuse through locally bound component architectures rather than through service orientation.
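One common way to avoid tying the internal domain model to the external interface is a translation layer at the service boundary, sketched below with hypothetical types. The internal entity can be renamed or restructured without rippling into consumers, who see only the published contract.

```java
// Internal domain model: free to evolve with the service implementation.
class OrderEntity {
    long id;
    double subtotal;
    double tax;
    OrderEntity(long id, double subtotal, double tax) {
        this.id = id;
        this.subtotal = subtotal;
        this.tax = tax;
    }
}

// External service contract: stable, versioned independently of the internals.
record OrderDto(String orderId, double total) {}

// Translation at the boundary isolates consumers from internal change,
// at the modest run-time cost of an object transformation.
class OrderAssembler {
    static OrderDto toDto(OrderEntity e) {
        return new OrderDto(Long.toString(e.id), e.subtotal + e.tax);
    }
}
```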
Aside from the decoupling and interoperability considerations discussed above, services can be used effectively to expose complex business functionality across business domain boundaries, or in cases where a business wants to enable the orchestration of complex business processes from services in multiple domains. Under these circumstances, the complexity and scope of the business logic and the orchestration process would make it difficult to deploy components that could be compiled into consuming applications. In short, the model depicted in Figure 2(b) is difficult to sustain when business logic becomes large, complex and spans domains.
Summary – Components vs. Service Orientation
Service orientation and component architectures are not mutually exclusive reuse strategies. They will likely coexist within the same architecture, and, at a minimum, services are often implemented with components, providing an opportunity for additional reuse if the organization has standardized around a component framework. Service orientation is a valuable architecture strategy to achieve interoperability when consumers use heterogeneous technologies. It can also provide a mechanism to isolate the impact of change from consumers, and to abstract away the complexity of business processes. However, service orientation comes at the expense of some latency across the interface boundary and some additional performance load on the service implementation. This is in part due to the remote deployment, in part due to the technology-agnostic messaging used to achieve interoperability and the marshaling required to serialize the interface object model, and, to a small degree, due to the layering used in the service design to provide the decoupling of the interface and the implementation. It should be clear at this point that a service is not simply a remote component wrapped with HTTP and XML. That line of thinking is a recipe for failure. Architects must create a comprehensive service-oriented architecture decomposition that minimizes the impact of the performance penalties while maximizing the reuse benefits.
Aside from situations where organizations are highly concerned with optimizing performance, there are still many cases where modularity is desired, but the inherent chattiness of the interaction across the interface boundary lends itself to a locally bound, fine-grained component interface, such as within the development of a service implementation. In short, that means it is unwise to attempt to expose all potentially reusable business logic as services. Service-orientation alone only provides for reuse of major business functions that are amenable to coarse-grained service interfaces with limited back-and-forth interaction across the network boundary. If the organization’s Use Case analysis reveals that significant component-level reuse could also be achieved, then such a strategy may be worth the effort of establishing a more homogeneous technology environment to enable component sharing across products or domains. In general, there should be many more components in a system architecture than there are services, but most of the components will be used within service implementations (Figure 3). Based on a business’ particular Use Cases and goals, the reuse strategy may be strictly at the service level, strictly at the component level (Figure 2b), or a combination of both.
Other Architecture Considerations
In addition to the way business logic is structured to achieve the platform goals, there are other efforts that can improve reuse and increase productivity: establishing consistency in software design, considering the need for extensibility up front, and recognizing the impact of design complexity on development productivity.
A domain model represents an organization’s common business lexicon, and it heavily influences software design and implementation (Object & Data Models). Domain models provide the foundation for consistency in an organization’s software structures, making software easier to read and understand, which increases developer productivity. Investing in the creation of a domain model provides the basis for common software implementations, but it cannot be adequately defined without a good understanding of product Use Cases through a partnership with product management teams.
Common Object Models – Much has been written about the utility of structuring software so that it reflects the business domain. Creating common object model representations of domain entities and relationships (e.g. user, address, common measurement units, etc) allows developers to reuse object models across applications and can improve performance in some cases by eliminating run-time transformations. Significant commonality in software structures across business logic implementations makes the developer experience more consistent across projects, which can improve productivity.
Common Data – It can be difficult to reuse business logic across applications if the underlying business data is fractured across different data models and databases. Unifying business logic into common services or components will almost certainly entail significant alignment of data models.
Although efforts to develop common implementations of common domain entities can have significant productivity benefits, it is also worth noting that organizations can take this too far by overloading common implementations with many orthogonal application concerns. This often manifests as bloated database table structures or object models with poor cohesion, which can result in tight coupling that pushes the impact of software changes to an unnecessarily wide audience. There is no blanket rule that can substitute for skillful software design.
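A common object model entry might look like the following hypothetical value object for a measurement. It is deliberately small and cohesive: one domain concept, validated once, reusable across applications, rather than a catch-all structure loaded with orthogonal application concerns.

```java
// A common value object shared across applications: one cohesive domain
// concept (a measurement with its unit), not a dumping ground for
// application-specific fields.
record Measurement(double value, String unit) {
    Measurement {
        if (unit == null || unit.isBlank()) {
            throw new IllegalArgumentException("unit required");
        }
    }
    // Derived operations live with the value they operate on.
    Measurement scaledBy(double factor) {
        return new Measurement(value * factor, unit);
    }
}
```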
Part of the art of designing software is to anticipate the different dimensions of potential future change, which are often referred to as variation points. These are the areas where development teams may need the ability to tailor features or behaviors for off-nominal application workflows, regional differences or future scope growth. Architects enable rapid extensibility through strategies for modularity and decoupling that build variation points into the software architecture. Extensibility considerations play a significant role in determining the decomposition of a software architecture into its constituent services and components.
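A variation point is often realized as an interface whose implementations can be swapped or added without modifying the logic that uses them. The sketch below uses hypothetical names: regional shipping rules vary, so the rule is injected rather than hard-coded.

```java
// The variation point: regional or future shipping rules plug in here.
interface ShippingPolicy {
    double costFor(double weightKg);
}

// Consuming logic is closed against change at this variation point;
// new regions mean new ShippingPolicy implementations, not edits here.
class OrderPricer {
    private final ShippingPolicy shipping;
    OrderPricer(ShippingPolicy shipping) { this.shipping = shipping; }
    double total(double subtotal, double weightKg) {
        return subtotal + shipping.costFor(weightKg);
    }
}
```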
In the pursuit of elegance, architects occasionally lose sight of the cost that design complexity has on the development, test and deployment processes. If an architecture strategy or design pattern brings only minimal benefit at the cost of significant complexity, then it is probably not a good solution. In all that architects do to realize an application platform, making the job of developing software easier should be at the forefront of their minds, and finding ways to abstract away complexity should be a driving goal. In design patterns, tooling, documentation, etc., simplicity should be a key objective.
Conclusions and Recommendations
“Wise men don’t need advice. Fools won’t take it.” —Benjamin Franklin
The previous sections provided an overview of topics related to application platform business motivations, reviewed the potential technical scope of an organization’s platform strategy and discussed architecture approaches for creating reusable business logic and improving productivity. The following is a distilled set of recommendations for organizations that want to create an application platform.
- Businesses must clearly articulate their goals along with the metrics they’ll use to evaluate the success of the application platform in achieving those goals. The goals should be expressed in terms of adding business value, not technology achievements. The metrics measure the efficacy of the platform technology achievements in providing value to the business.
- The desired architecture outcome is more likely to be achieved if the organizational model is biased toward the architecture. However, any potential organizational change must be evaluated with respect to the business’ overall priorities and the available talent pool.
- Invest in creating a Product Management organization capable of defining platform Use Cases and variation points. Ensure that the staff has the skills needed to communicate to engineers in a parlance they understand.
- Invest in software engineering personnel with the skills needed to correctly employ service orientation and component architectures to enable broad reuse and rapid extensibility without sacrificing other architecture quality attributes.
- Determine the level in the organization where common technical approaches are important to achieving the platform goals, and then enable developer productivity and promote reuse at that level through prescriptive selection of languages, framework technologies and industry standards.
- Define repeatable design conventions for common integration and orchestration requirements.
- Invest in experienced architects and engineering managers who can promote the reuse of successful technical solutions across the organization, and streamline a process to empower fast-moving teams to innovate when necessary to solve unique problems.