In the News. Introducing The Rational Reader. Mike Perrow Editor-in-Chief


Editor's Notes

Rational co-founder and chairman Paul Levy has observed a striking difference between writing software and writing a novel: Put a dozen novelists in the same room, and together they couldn't possibly improve on the individual genius of a William Faulkner or a Virginia Woolf. But when it comes to writing software, it's a different story. Software development is a team sport, and collectively, a well-organized team of developers can create ingenious software systems that actually add up to more than the sum of their individual talents.

In this month's cover story, software methodologist Walker Royce examines how software development teams can realize their full potential through effective management and through improved automation techniques. As part of his continuing series on "Improving the Economics of Software Development," this installment examines how "doing more with less" is achievable when software project teams are clearly focused on results.

Speaking of Virginia Woolf, Dawn Haynes is decidedly unafraid. She takes a close look at automated testing and considers whether or not it provides the "silver bullet" solution to the time-, resource-, and people-intensive efforts associated with testing software. (Hint: In this movie, the werewolf is kept at bay, but it takes more than "pure magic" to save the village.) Jim Heumann explores another testing territory, showing how use cases can be used to generate test cases quite early in the development process.

And there's lots more: Java application profiling with Rational Purify. The OMG's new initiative regarding Model Driven Architecture. Plus, Jim Ressler from Raytheon offers a "Strategy for Managing Multiple Baselines with Rational ClearCase," and Joe Marasco gives us three slices of life in this month's Franklin's Kite.

Send more e-mail! Especially if you have specific questions or concerns about things you read here. (Last month, we received some very nice messages about the quality of this publication, which, of course, I forwarded to the boss. But please feel free to use The Rational Edge as a forum for your hard-hitting questions, too.)

Mike Perrow
Editor-in-Chief

P.S. Read any good books lately? Send us a review for the new Rational Reader section!

In the News

Rational User Conference (RUC) is here again! Register online today!

Introducing The Rational Reader: Read Joe Marasco's introduction.

Improving Software Development Economics, Part III: Improving Team Proficiency and Improving Automation

by Walker Royce
Vice President and General Manager, Strategic Services
Rational Software

This is the third installment of a four-part series of articles that summarize our experience and discuss the key approaches that have enabled our customers to make substantial improvements in software development economics. In the first article, I introduced a framework for reasoning about economic impacts and introduced four approaches to improvement:

1. Reduce the size or complexity of what needs to be developed.
2. Improve the development process.
3. Use more proficient teams.
4. Use integrated tools that exploit more automation.

In last month's article, I discussed the first two approaches. This month, I will discuss some of the discriminating techniques involved with the last two approaches: using more proficient teams and exploiting more automation.

Improving Team Proficiency

Getting more done with fewer people is the paramount underlying need for improving software economics. The demand for competent software professionals continues to outpace the supply of qualified individuals. In almost every successful software development project and software organization that Rational encounters, there is a strong commitment to configuring the smallest, most capable team. However, most troubled projects are staffed with more people than they require. "Obese" projects usually occur because the project culture is more focused on following a process than on achieving results. In the previous article covering process improvement, and in the Rational Unified Process, there is a continuous, lifecycle emphasis on achieving results. This is a subtle but paramount differentiator between successful, results-driven, iterative development projects and unsuccessful process-driven projects.

So how can organizations use smaller, more capable teams? Rational has identified three different levels that need to be addressed: enhancing individual performance, improving project teamwork, and advancing organizational capability.

Enhancing Individual Performance

Organizations that analyze how to improve their employees' proficiency generally focus on only one dimension: training. Although training is an important mechanism for enhancing individual skills, team composition and experience are equally important dimensions that should be considered.

Balance and coverage are two important characteristics of excellent teams. Balance requires leaders and followers, visionaries and crank-turners, optimists and pessimists, conservatives and risk takers. Whenever a team is out of balance, it is vulnerable. Software development is a team sport. A team loaded with superstars, each striving to set individual records and be the team leader, can be embarrassed by a balanced team of solid players with a few leaders focused on the team result of winning the game. Managers must nurture a culture of teamwork and results rather than individual accomplishment. The other important characteristic, coverage, requires a complement of skill sets that span the breadth of the methods, tools, and technologies.

Two dimensions of experience are equally important for achieving sustained process improvements: software development process maturity and domain knowledge. Unprecedented systems are much riskier endeavors than systems that have been built before. Experience in building similar systems is one of the paramount qualities needed by a team. This precedent experience is the foundation for differentiating the 20 percent of the stuff that is architecturally significant in a new system. A mature organization that builds real-time command and control systems will not be capable of exhibiting its usual mature performance if it takes on a new application domain such as e-business Web site development.

Improving Project Teamwork

Although it is difficult to make sweeping generalizations about project organizations, some recurring patterns in successful projects suggest that a core organization should include four distinct subteams: management, architecture, development, and assessment. The project management team is an active participant, responsible for producing as well as managing. Project management is not a spectator sport. The architecture team is responsible for design artifacts and for the integration of components. The development team owns the component construction and maintenance activities. The assessment team is separate from development, to foster an independent quality perspective as well as to focus on testability and product evaluation activities concurrent with ongoing development throughout the lifecycle. There is no separate quality team because quality is everyone's job, integrated into all activities and checkpoints. However, each team takes responsibility for a different quality perspective.

Some proven practices for building good software architectures are equally valid for building good software organizations. The organization of any project represents the architecture of the team and needs to evolve in synch with the project plans. Defining an explicit architecture team with ownership of architectural issues and integration concerns can provide simpler and less error-prone communications among project teams.

Figure 1 illustrates how project team staffing and the organizational center of gravity evolve over the lifecycle of a software development project.

Inception: A management team focus on planning, with enough support from other teams to ensure that the plans represent a consensus of all perspectives.

Elaboration: An architecture team focus, where the driving forces of the project reside in the software architecture team and are supported by the software development and software assessment teams as necessary to achieve a stable architecture baseline.

Construction: A development team focus, where most of the activity resides in the software development and software assessment teams.

Transition: A customer-focused organization, where usage feedback is driving the organization and activities.

Figure 1: Team Evolution Over the Software Lifecycle

Teamwork is much more important than the sum of individual skills and efforts. Project managers need to configure balanced teams with a foundation of solid talent and put highly skilled people in the high-leverage positions. These are some project team management maxims:

A well-managed project can succeed with nominal engineering talent. An expert team of engineers will almost never succeed if a project is mismanaged.

A well-architected system can be built by a nominally talented team of software builders. A poorly architected system will flounder even with an expert team of builders.

Advancing Organizational Capability

Organizational capability is best measured by trends in project performance rather than by key process area checklists, process audits, and so forth. Figure 2 provides some simple graphs of project performance over time to illustrate the expectation for four different levels of organizational capability.

1. Random: Immature organizations use ad hoc processes, methods, and tools on each new project. This results in random performance that is frequently unacceptable. Probably 60 percent of the industry's software organizations still operate with random, unpredictable performance.

2. Repeatable: Organizations that are more mature use foundation capabilities roughly traceable to industry best practices. They can achieve repeatable performance with some relatively constant return on investments in processes, methods, training, and tools. In our experience, about 30 percent of the industry's software development organizations have achieved repeatable project performance.

3. Improving: The industry's better software organizations achieve common process frameworks, methods, training, and tools across an organization within a common line of business. Consistent, objective metrics can be used across projects, which can result in an improving return on investment from project to project. This is the underlying goal of ISO 9000 or SEI CMM process improvement initiatives, although most such initiatives tend to take process- and activity-focused perspectives rather than project-result-focused perspectives. At most, 10 percent of the industry's software development organizations operate today at this level of capability.

4. Market leading: Organizations achieve excellent capability, which should align with market leadership, when they have executed multiple projects under a common framework with successively better performance; have achieved an objective experience base from which they can optimize business performance across multiple performance dimensions (trading off quality, time to market, and costs); and practice quantitative process management.

Figure 2: Organizational Capability Improvement Measured Through Successive Project Performance

In any engineering venture where intellectual property is the real product, the dominant productivity factors will be personnel skills, teamwork, and motivations. To the extent possible, a modern process encapsulates the requirements for high-leverage people in the early phases, when the team is relatively small. The later production phases, when teams are typically much larger, should then operate with far less dependency on scarce expertise.

Improving Automation Through Integrated Tools

In last month's article, I described process improvements associated with transitioning to iterative development. These improvements are focused on eliminating steps and minimizing the scrap and rework inherent in the conventional process. Another form of process improvement is to improve the efficiency of certain steps by improving automation through integrated tools.

Today's software development environments, combined with rigorous engineering languages like UML, enable many tasks that were previously manual to be automated. Activities such as design analysis, data translations, quality checks, and other tasks involving a deterministic production of artifacts can now be done with minimal human intervention. Environments should include tools for requirements management, visual modeling, document automation, host/target programming tools, automated regression testing, integrated change management, and feature/defect tracking.

Today, most software organizations are facing the need to integrate their own environment and infrastructure for software development. This typically results in the selection of more or less incompatible tools with different information repositories, from different vendors, on different platforms, using different jargon, and based on different process assumptions. Integrating and maintaining such an infrastructure has proved to be much more problematic than expected. An important emphasis of a modern approach is to define an integrated development and maintenance environment as a first-class artifact of the process.

Commercial processes, methods, and tools have synthesized and packaged industry best practices into mature approaches applicable across the spectrum of software development domains. The return on investment in these commercial environments scales up significantly with the size of the software development organization; they also promote useful levels of standardization and minimize the additional organizational burden of maintaining proprietary alternatives.

Improving Human Productivity

Planning tools, requirements management tools, visual modeling tools, compilers, editors, debuggers, quality assurance analysis tools, test tools, and user interfaces provide crucial automation support for evolving the intermediate products of a software engineering effort. Moreover, configuration management environments provide the foundation for executing and instrumenting the process. Viewed in isolation, tools and automation generally yield 20 to 40 percent improvements in effort. These same tools and environments, however, are also primary vehicles for reducing complexity and improving process automation, so their impact can be much greater.

Tool automation can help reduce the overall complexity in automated code generation from UML design models, for example. Designers working at a relatively high level of abstraction in UML may compose a model that includes graphical icons, relationships, and attributes in a few diagrams. Visual modeling tools can capture the diagrams in a persistent representation and automate the creation of a large number of source code statements in a desired programming language. Hundreds of lines of source code are typically generated from tens of human-generated visual modeling elements. This 10-to-1 reduction in the amount of human-generated stuff is one dimension of complexity reduction enabled by visual modeling notations and tools.

Eliminating Error Sources

Each phase of development produces a certain amount of precision in the product/system description; these descriptions are called software artifacts. Lifecycle software artifacts are organized into five sets that are roughly partitioned by the underlying language of each set:

1. Requirements (organized text and UML models of the problem space)
2. Design (UML models of the solution space)
3. Implementation (human-readable programming language and associated source files)
4. Deployment (machine-processable languages and associated files)
5. Management (ad hoc textual formats such as plans, schedules, metrics, and spreadsheets)

At any point in the lifecycle, the different artifact sets should be in balance, at compatible detail levels, and traceable to each other. As development proceeds, each part evolves in more detail. When the system is complete, all five sets are fully elaborated and consistent with each other. As the industry has moved toward maintaining different information repositories for the engineering artifacts, we now need automation support to ensure efficient and error-free transition of data from one artifact to another. Round-trip engineering describes the environment support needed to change an artifact freely and have other artifacts automatically changed, so that consistency is maintained among the entire set of requirements, design, implementation, and deployment artifacts.

Enabling Process Improvements

Real-world project experience has shown that a highly integrated environment is necessary both to facilitate and to enforce management control of the process. An environment that captures artifacts in rigorous engineering languages such as UML and programming languages can provide semantic integration (where the environment understands the detailed meaning of the development artifacts) and significant process automation to improve productivity and software quality. An environment that supports incremental compilation, automated system builds, and integrated regression testing can provide rapid turnaround for iterative development, allow development teams to iterate more freely, and accelerate the adoption of modern techniques.

Objective measures are required for assessing the quality of a software product and the progress of the work; these provide different perspectives on a software effort. Architects are more concerned with quality indicators; managers are usually more concerned with progress indicators. The success of any software process whose metrics are collected manually will be limited. The most important software metrics are simple, objective measures of how various perspectives of the product/project are changing. Absolute measures are usually much less important than relative changes with respect to time. The incredibly dynamic nature of software projects requires that these measures be available at any time, tailorable to various subsets of the evolving product (subsystem, release, version, component, team), and maintained such that trends can be assessed (first and second derivatives). Such continuous availability has been achieved only in development/integration environments that maintain the metrics online, as an automated by-product.

In next month's issue of The Rational Edge, I will conclude this series with a summary of how to balance priorities and a few lessons we've learned about eliminating sources of friction associated with organizational changes targeted at improving software economics.


Model Driven Architecture Targets Middleware Interoperability Challenges

by Richard Soley, Chairman and Chief Executive Officer, Object Management Group, and the OMG Staff Strategy Group

"CORBA was a powerful first step, but we have more steps to take." -- Fred Waskiewicz, OMG Director of Standards

Since its inception, the Object Management Group (OMG) has provided vendor- and language-independent interoperability standards to the enterprise. The CORBA (Common Object Request Broker Architecture) standard has recently been modified for -- and embraced by -- environments that require specialized real-time, fault-tolerant, and embedded systems. The OMG's complementary core modeling specifications include the Unified Modeling Language (UML), the Common Warehouse Metamodel (CWM), the Meta-Object Facility (MOF), and XML Metadata Interchange (XMI).

Now, the OMG is building on the success of CORBA and the UML to take aim at the problems arising from the proliferation of enterprise middleware. Its Model Driven Architecture (MDA) initiative champions the extensibility and reliability of CORBA while acknowledging that enterprises cannot simply abandon their investment in other technologies. (More on MDA can be found on the OMG Web site.)

Rational Software has been particularly active, along with many other OMG member companies, in contributing ideas and principles toward the creation of MDA. If you are a Rational customer interested in modeling or software development infrastructure, I think you'll be interested in the OMG's concepts, goals, and plans for MDA, as described in the following article.

Over the past decade or so, the middleware landscape has continually shifted. For years we've assumed that a clear winner would emerge and stabilize this state of flux, but the time has come to admit openly what many of us have suspected all along: The string of emerging contenders will never end! And, despite the advantages (sometimes real, sometimes imagined) of the latest middleware platform, migration is almost always expensive and disruptive.

OMG's layered services and vertical market specifications are built on CORBA -- which we regard as the optimum middleware -- and strongly established through the OMG community process. We do recognize, however, that enterprises often have applications on other middleware that simply have to be integrated into new or modified systems, even though this process is time-consuming and expensive. Furthermore, the middleware these enterprises use continues to evolve. And to make matters even more complicated, before the Internet evolved into an enterprise marketplace, organizations often used different technologies for communication within and beyond their firewall. Now, some businesses want to expose components they built for internal communication out beyond the firewall -- for business-to-business e-commerce, for example. Others want to move components off their extranets and place them behind their firewalls because of an acquisition or merger. So in addition to resolving basic integration problems, IT organizations must find a way to preserve their development investment in new components as enterprise boundaries shift -- and the underlying technologies change.

Addressing the Problem: Model Driven Architecture

Fortunately, there is a way to manage this situation. Building on OMG's core modeling standards, we created a Model Driven Architecture (MDA) that is language-, vendor-, and middleware-neutral. As Figure 1 shows, the core of this architecture is based on the UML, the MOF, and CWM. Multiple core models [1] are currently under development: One will represent enterprise computing with its component structure and transactional interaction; another will represent real-time computing with its special needs for resource control; more will be added to represent other specialized environments. Each core model will be independent of any middleware platform. The total number, however, will be small, because each core model will represent the common features of all the platforms in its category. [2]

Whether your ultimate target is the CORBA Component Model (CCM), Enterprise JavaBeans (EJB), Microsoft MTS and the new .NET architecture, or some other component- or transaction-based platform, the first step in constructing an MDA-based application will be to create a platform-independent application model -- using the UML -- that is consistent with the appropriate core model. Then, platform specialists can convert this general application model into one targeted to a specific platform such as CCM, EJB, or .NET. Figure 1 shows these target platforms in the thin ring surrounding the core.

Figure 1: OMG's Model Driven Architecture
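To make the conversion step concrete, here is a minimal sketch of what a platform specialist (or an MDA code generator) might emit when mapping a single platform-independent model element -- say, a Customer entity with a name attribute -- onto the EJB platform. The type and method names are invented for this illustration and are not drawn from any OMG specification:

    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    import javax.ejb.EJBHome;
    import javax.ejb.EJBObject;

    // Platform-specific (EJB) rendering of the platform-independent
    // "Customer" model element from the enterprise core model.
    public interface Customer extends EJBObject {
        String getName() throws RemoteException;
        void setName(String name) throws RemoteException;
    }

    // Lifecycle operations implied by the component model; in a real
    // project this would live in its own source file.
    interface CustomerHome extends EJBHome {
        Customer create(String customerId) throws CreateException, RemoteException;
    }

A CORBA mapping of the same model element would produce an IDL interface, and a .NET mapping a managed class; the business semantics stay in the UML model either way.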

Although standard mappings will allow tools to automate some of the conversion, in most cases some hand coding will be required, especially in the absence of MDA tools. As users and tool builders gain experience, and techniques for modeling application semantics become better developed, less human intervention will be needed.

The platform-specific model faithfully represents both the business and technical run-time semantics of the application. It's still a UML model, but it is expressed (because of the conversion step) in a dialect (i.e., a profile) of UML that precisely mirrors the technical run-time elements of the target platform. The semantics of the platform-independent original model are carried through into the platform-specific model.

The next step is to generate the application code itself. For component environments, the system will have to produce many types of code and configuration files, including interface files, component definition files, program code files, component configuration files, and assembly configuration files. The more completely the platform-specific UML dialect reflects the actual platform environment, the more completely the application semantics and run-time behavior can be included in the platform-specific application model, and the more complete the generated code can be. In a mature MDA environment, code generation -- provided through tools from vendors such as Rational Software and their competitors -- will be substantial or perhaps even complete in some cases. Early versions are unlikely to provide a high degree of automatic generation, but even initial implementations will simplify development projects and represent a significant gain, on balance, for early adopters; they will be using a consistent architecture for managing the platform-independent and platform-specific aspects of their applications.

As Figure 1 shows, many of today's connection technologies will be integrated by the MDA, with room for tomorrow's "next best thing."

CORBA represents the best middleware choice because it is vendor- and language-neutral, and bridges easily to all of the other middleware environments. To accommodate those enterprises with multiple middleware platforms on their network, however, many non-CORBA platforms will be incorporated into the MDA. One of the first will be the Java-only EJB.

Adding New Middleware Platforms

Because the MDA is platform-independent at its core, adding new middleware platforms to the interoperability environment will be straightforward: After identifying the way a new platform represents and implements common middleware concepts and functions, OMG members can incorporate this information into the MDA as a mapping. Various message-oriented middleware tools, plus XML/SOAP (Simple Object Access Protocol) and .NET, will be integrated in this way; in fact, by rationalizing the conflicting XML document type definitions (DTDs) that are being proposed in some industries, the MDA can even help organizations interoperate across them. And, as representations of multiple middleware platforms are added to the MDA and mature over time, the generation of integration tools -- bridges, gateways, and mappings from one platform to another -- will become more automated.

Interoperability will be most transparent within an application category: enterprise applications with other enterprise applications; real-time applications with other real-time applications. This follows from our approach of a separate core model for each category; differences between application categories prevent us from basing all applications on a single core model. But identifying and exploiting concepts common to two or more categories can smooth over the boundaries to some extent.

Working with Legacy Applications

Our discussion so far has assumed that we were building an application -- and its model -- from scratch. Legacy applications present different challenges: Many were built before component environments were even conceived and do not fit neatly into any of our core models. Legacy applications may be brought into the MDA, however, by wrapping them with a layer of code that is consistent with an MDA core model. If we build an MDA model of the wrapper first, then the outer portion of that wrapper -- the one that faces the network and interoperates with our other applications and services -- may be generated automatically, at least in part. The other side of the wrapper -- the one that invokes and returns from the legacy application itself -- typically must be hand coded.

An Internet ORB

As a next-generation OMG standard currently in development, the MDA can serve as an Internet Object Request Broker (ORB), integrating across all middleware platforms, past, present, and future. OMG, the organization that knows ORBs better than any other, is ideally suited to extend this concept beyond middleware standards to a middleware-neutral, model-driven approach, offering users these specific advantages:

- Organizations will be able to build new MDA-based applications using the middleware of their choice. They will have the security of knowing that the essential semantics of their application have been systematically distilled into a platform-independent model, and that any future migrations they might need to make to different middleware (or even new versions of the same middleware) will be reasonably manageable. In addition, they can produce interoperability bridges and gateways to other MDA-based applications within an enterprise, as well as interconnections with customers, suppliers, and business partners, in a methodical way, using a consistent architecture and some degree of automatic generation.

- Legacy applications -- the ones that keep your business in business -- will interoperate with an organization's current applications once they are wrapped as we described and their functions are incorporated into the MDA. They can remain on their established platforms; the MDA will help automate construction of bridges from one platform to another.

- Industry standards for all verticals will include platform-independent models defined in terms of the MDA core models: standard facilities performing standard functions, which you can buy instead of build, with interoperability and evolvability improved by their MDA roots. We'll describe these facilities and their role below.

- As new middleware platforms emerge, the OMG's rapid, consensus-based standardization process will incorporate them into the MDA by defining new standardized mappings. MDA tools will thus be able to target additional platforms for conversion from a platform-independent model. These tools will also be able to support bridges to the new platforms.

- Developers will gain the ultimate in flexibility: the ability to regenerate code from a stable, platform-independent model as the underlying infrastructure shifts over time. ROI will rise from the reuse of application and domain models across the software lifespan, especially during long-term support and maintenance, the most expensive phase of an application's life. Models are built, viewed, and manipulated via UML, transmitted via XMI, and stored in MOF repositories.

- Formal documentation of system semantics (through modeling) will increase software quality and extend the useful lifetime of designs (thereby increasing ROI).

Taking advantage of our standards and the tools that exploit them, OMG members have this integration task well underway. They are defining the Enterprise Computing Core Model and mapping it to the most widely used middleware platforms. They are also defining a core model for real-time computing.

Standardizing Domain Models

Since January 1996, a sizeable percentage of OMG members have been meeting in Domain Task Forces (DTFs), communities focused on standardizing services and facilities in specific vertical markets. Until now these specifications have consisted of interfaces written in OMG Interface Definition Language (IDL), with accompanying semantic descriptions in English text. Standardizing components at a platform level, as we have done with CORBA, is certainly a viable contribution to solving the integration and interoperability problem, but we are now prepared to go a step beyond that.

A well-conceived service or facility is always based on an underlying semantic model that is independent of the target platform, even if that model is not documented explicitly. OMG's domain specifications fall into this category because the models for them are not expressed separately from their IDL interfaces. Since their models are hidden, these services and facilities have received neither the recognition nor the widespread implementation and use that they deserve outside of the CORBA environment, especially considering the quality of their underlying models. Extending these implied models outside of CORBA just makes sense. The OMG has already implemented the Healthcare Resource Access Decision Facility, for example, in Java and EJB as well as CORBA. And there are more underway, as shown in Figure 1.

Basically, each DTF will produce standard frameworks for standard facilities in their application space. These will be formulated as normative, platform-independent UML models augmented by normative, platform-specific UML models and interface definitions for at least one target platform. Their common basis in MDA will also promote partial generation of implementation code, but that code, of course, will not be standardized. For manufacturing, for example, the DTF could produce normative MDA UML models, IDL interfaces, Java interfaces, XML DTDs, etc. for CAD/CAM interoperability, PDM (Product Data Management), and supply chain integration (see Figure 2). Once these models are completed and adopted, their implementation can be partially automated in any middleware platform supported by the MDA.

Figure 2: UML-Based Model Frameworks for Manufacturing
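As a rough sketch of what a platform-specific Java rendering of such a domain facility might look like, consider a fragment of a hypothetical PDM interface; the names and operations below are invented for this example and are not taken from any adopted OMG specification:

    // Hypothetical Java rendering of one slice of a platform-independent
    // PDM model; a DTF would standardize the UML model, and an MDA mapping
    // would generate interfaces of this shape.
    public interface PartRepository {
        // Look up the released revision of a part by its catalog number.
        PartRevision findReleasedRevision(String catalogNumber);

        // Record a new revision of a part and return its identifier.
        String checkInRevision(PartRevision revision);
    }

    // Simple value object describing one revision of a part.
    class PartRevision {
        String catalogNumber;
        String revisionLabel;
        String cadDocumentUri; // link back to the CAD/CAM design data
    }

An IDL or XML rendering generated from the same model would expose the same operations and data, which is what makes interoperability among the three facilities tractable.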

The three facilities in our example -- CAD/CAM, PDM, and Supply Chain -- would benefit from the interoperability that only the MDA can provide. Because CAD/CAM and PDM applications are tightly integrated, they are likely to be implemented by an individual enterprise or software vendor in, for example, CORBA or EJB. Supply chain integration, by contrast, is more of an inter-enterprise function, so we might expect an XML/SOAP-based implementation supported by an industry market-maker or trade organization to become popular. It will be essential to interoperate among the three, however: CAD/CAM designs feed into PDM production facilities that drive the supply chain; in turn, the supply chain will refer back to CAD/CAM for details on a particular part. If all three functions start out as UML models in the MDA, we may eventually be able to generate a significant portion of the implementation for each on its preferred platform, as well as the bridges we need to integrate each of the facilities with the other two.

Including Pervasive Services

Enterprise, Internet, and embedded computing rely on a set of essential services. The list varies somewhat depending on the source but typically includes directory services, event handling, persistence, transactions, and security. In addition, computing systems or applications may take on specialized attributes in either their hardware or software -- that is, they may be scalable, real-time, fault-tolerant, or designed to fit into a confined (embedded) environment. When these services are defined and built on a particular platform, they necessarily take on characteristics that restrict them to that platform, or ensure that they work best there. To avoid this, OMG will define Pervasive Services at the platform-independent model level in UML. Only after the services' features and architecture are fixed will platform-specific definitions be generated for all of the middleware platforms supported by the MDA.

At the abstraction level of a platform-independent business component model, services are depicted at a very high level (similar to the view the component developer has in CCM or EJB). When the model is mapped to a particular platform, developers will use their development tools of choice to generate code (or dynamically invoke it) that makes calls to the native services of those platforms. The pervasive services will be visible only to lower-level applications, i.e., those that write directly to services.

Hardware and software attributes -- scalability, real-time, fault tolerance, or embedded characteristics -- will be modeled as well. By defining UML representations for these attributes or, in the case of fault tolerance, for an environment that combines the attribute with enterprise computing, OMG will extend the MDA to support and integrate applications with these desirable characteristics.

Figure 3: MDA Encompasses Pervasive Services and Specialized Computing Environments

Figure 3 emphasizes that pervasive services are available to all applications, in all environments. True integration requires a common model for directory services, events and signals, and security. By clarifying that these services are implementable in different environments and easily integrated, MDA represents our goal of universal integration: it becomes a global information appliance.

An Invitation From the OMG

Although much of the infrastructure for the MDA is in place or under construction, there is still a lot to do. If your company works at either the modeling or infrastructure level, you can have a voice in defining the MDA. Requests for Proposals (RFPs) have been issued for UML 2.0, and all of the components of the Business Objects Initiative (BOI) except the first are still in their formative stages in the OMG adoption process.

Of the mappings to various middleware environments, only the one to CORBA is even in progress; the rest exist only as potential RFPs. UML models for the pervasive services have not yet been constructed or adopted. Application models defined by the DTFs will form the basis for implementations extending from CORBA to every middleware environment. Whether your company is a provider or user of domain-level applications, now is the time to get involved in their standardization. As a provider, you can maximize your impact on future standards and be recognized as a key player. As a user, you can integrate your company's requirements into the RFP that defines the new standard and influence the models and standards that you will eventually use. You will also enjoy working with the best and brightest in the industry to develop your architecture of choice. One condition: To ensure that OMG standards remain relevant to the marketplace, companies whose submissions are adopted by OMG members as a standard must agree to market or commercially use an implementation of the specification.

With MDA, the OMG is continuing its quest to support integration and interoperability across heterogeneity at all levels. Our first goal -- to enable integration by introducing a distributed object model -- is complete. Today, objects are at the core of every vendor's enabling architecture and all e-businesses. But our integration mission is not yet fulfilled; now, we must evolve from a middleware-centric to a modeling-centric organization.

That does not mean, of course, that we are leaving CORBA behind. CORBA is a foundation of this new architecture. As the only vendor- and language-independent middleware, it is a vital and necessary part of the MDA superstructure; software bridges would be hard to build without it. To give this superstructure maximum extensibility and move the reuse equation up one level, however, we must focus on expressing the architecture completely in terms of modeling concepts.

Another building block of this new architecture is a more concentrated focus on conformance testing, certification of programmers, and certification of products (branding). For this we will leverage the work of our current Analysis and Design Task Force, which has undertaken testing and branding projects relating to the UML, the MOF, XMI, and CWM, and is now working on the BOI and UML representation of Enterprise Application Integration (EAI). Ultimately, of course, the success of our efforts in these areas will depend on strong relationships with outside organizations with relevant expertise.

[1] The OMG calls these models UML Profiles. A number of these profiles are already well along their way to standardization.

[2] In technical terms, it is a metamodel of the category.


Monitoring Object Creation in Java: Application Profiling with Rational PurifyPlus

by Goran Begic
Technical Marketing Engineer, Development Solutions
Rational Software B.V., The Netherlands

In the January issue of The Rational Edge, I wrote about memory leaks in Java applications and possible approaches for detecting and resolving them. As I explained, contrary to common myths about the Java garbage collection mechanism's infallibility, there are objects in memory that the garbage collector cannot reach, and those objects are called "memory leaks."

Another potential performance bottleneck -- extensive memory usage -- is much simpler to explain: While the garbage collector is cleaning memory of unused objects, the performance of the application executed through the Virtual Machine declines. Therefore, although the use of objects is at the very heart of object-oriented programming, and Java is definitely an OO language, an excessive number of objects can seriously decrease a Java application's effectiveness. For this reason, it is important to monitor the level of object creation when profiling a Java application. In this article we'll take a look behind the garbage collection curtain at the world of object creation, and see how Rational Purify can help keep it under control.

Java Objects and Object Creation

There are many books written about objects in Java because Java programming is all about objects. Objects can be defined as instances of classes, which in turn can be considered templates for objects; classes define all features for a family of objects. When a new object gets created on the heap by calling new(), the first thing that happens is that a chunk of memory space on the heap gets allocated. After that, the constructor for the object's class is called. If a constructor is not specifically defined, then the default constructor will be used. Additionally, the object's fields get automatically initialized, and the garbage collector marks both the object reference and the object in its list of objects and references. This means that every time we call new() we get more than a reference to a memory location; we actually get an initialized object.
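As a minimal sketch of that sequence (the class and field names here are invented for the illustration):

    // Each use of new allocates heap space, runs the constructor, and
    // returns a reference to a fully initialized object.
    public class Account {
        String owner;
        long balance;                    // initialized to 0 automatically

        public Account(String owner) {
            this.owner = owner;          // constructor runs after allocation
        }

        public static void main(String[] args) {
            Account a = new Account("demo");  // one allocation, one constructor call
            System.out.println(a.owner + ": " + a.balance);
        }
    }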

A large number of small objects, especially if they are created in loops that are called often in the application, can be a major performance bottleneck. In Java (unlike in C++), you cannot allocate large memory areas simultaneously for multiple temporary objects. Instead, you must follow the same creation/destruction process for every single temporary object in memory.

Temporary local objects can also create problems. Java doesn't use the stack for such allocations in the same way that C++ does. Instead, it creates all temporary objects on the heap, and not all of them get released automatically. They are released only after all the references to them are removed and the garbage collector decides to clear them. If you have a large number of objects, then the garbage collector has to be executed more often, and the program takes longer to execute.

To better understand the correlation between the number of objects created by an application and application performance, let's look at String objects.

String Objects

One characteristic that makes String objects special is their immutability. To modify a String object, you must create a new object and leave the previous one for garbage collection. This can become a problem if you repeat the operation frequently in your application. For example, take a very simple method declaration like the one below:

public String methodA(String expression);

The method takes a String object as a parameter, and we will assume that it also returns a String object, since that is very typical for interface methods. Since String objects are immutable, at least two temporary objects must be created every time the method is executed. That alone does not represent a performance issue, but it becomes an issue if you call this method several thousand -- or several hundred thousand -- times during the run. If you call the method 5,000 times, for example, then the number of objects created for such a method will be at least 10,000. There will also have to be 10,000 calls to the constructor of the object, and so on.

String objects are closely related to StringBuffer objects; often, temporary StringBuffer objects are created implicitly, as in this example of String concatenation:

String str4 = str1 + str2 + str3;

What actually gets executed with this statement is the allocation of a new StringBuffer object to which str1, str2, and str3 are appended; the result is then written to a new String when the toString() method is executed on the StringBuffer object. Please note that none of the temporary String objects is reused. After they serve their purpose, the unnecessary objects remain in line, waiting for the mighty garbage collector to pick them up. Again, if such a String concatenation is executed in a loop, the result will be a large number of objects, more garbage collections, and a decline in performance.

Although creation of String objects takes time, it also has advantages -- internationalization support and compatibility with existing interfaces, for example. On many occasions there is no way to avoid Strings.
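As a small, self-contained sketch of the effect described above (the variable names and loop count are arbitrary), the first loop below creates a new String -- and an implicit temporary StringBuffer -- on every iteration, while the second reuses a single StringBuffer and creates one String at the end:

    public class ConcatDemo {
        public static void main(String[] args) {
            // Roughly two temporary objects per iteration.
            String s = "";
            for (int i = 0; i < 5000; i++) {
                s = s + i + ",";
            }

            // One buffer reused; only the final toString() creates a String.
            StringBuffer sb = new StringBuffer();
            for (int i = 0; i < 5000; i++) {
                sb.append(i).append(',');
            }
            String t = sb.toString();

            System.out.println(s.length() + " " + t.length());
        }
    }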

Collecting Information About Object Creation

There is more than one way to monitor for excessive object creation. We will examine each option below.

Java Virtual Machine Profiler Interface (JVMPI)

The Sun Java Virtual Machines are equipped with powerful profiling interfaces through which it is possible to collect information about events and the heap usage of the JVM. Among the types of events that can be monitored through the JVMPI are:

- Byte code instruction execution
- Loading and unloading of classes
- Garbage collections
- Exceptions
- JIT method compilation
- Method calls
- Object monitoring
- Thread creation and destruction

The Microsoft Java Virtual Machine doesn't support JVMPI as it is defined in the Java 2 specifications. The Microsoft solution is based on COM interfaces and callbacks. The techniques for using it are rather different from those for the Sun JVMPI, but it can monitor all of the events listed above. The Microsoft solution also provides information about source line execution.

Byte Code Insertion

Another way of collecting information about program execution is directly from the byte code. This is what enables Purify for Java to collect source line execution information, for example, regardless of the Java Virtual Machine used. The technique is called Byte Code Insertion (BCI). The name resembles a Rational patented technique called Object Code Insertion (OCI), which is used with natively compiled applications.

Java class files are meant to be quickly transmitted over the network to end users. In order to achieve a high level of portability, Java code is compiled into intermediate byte code that is then interpreted by virtual machines specific to the platforms the code runs on. Byte code execution and its translation into machine code are not included in the Java specification. Testing tools like Rational Purify for Java add instructions to the intermediate byte code that enable them to monitor the execution of the Java application.

The fact that all necessary symbols are included in the class files simplifies instrumentation of the byte code. The Java VM will link such instrumented code and execute it; the VM is not aware of changes in the byte code that took place between compilation and execution.

Object Creation Profiling

The JVMPI interface specifications are public, and any third-party vendor can use them to gather data for their tools [1]. The most rudimentary tool that takes advantage of the JVMPI is called hprof. It is a DLL shipped and installed with the Sun JDK. When loaded, hprof "listens" to JVM events and writes a log file with information about method execution times and heap usage. Unfortunately, this log file is very difficult to use; it can be extremely large because of the large number of events that the JDK is processing. There are two formats for the dump file: ASCII and binary; the latter requires a special tool to analyze the results.

Test Application

In a book called Java Pitfalls, you can find the Java source file for the application that we will use to demonstrate these object creation profiling methods and tools [2]. The References section at the end of this article contains further interesting reading material on the topic.

The test application is simple and straightforward. Its main purpose is to create a large number of objects. The main class, Library, creates an array of objects of the type BookShelf. Each BookShelf object creates an array of objects of the type Book, and every Book object creates an array of TextPage objects. (A minimal sketch of this structure appears after the hprof output below.) After a sufficient number of iterations, the memory used for all those objects exceeds the space available to the application; the program runs out of memory and throws the appropriate exception.

Hprof can be invoked from the command line by specifying an additional option for the java executable that will load hprof.dll:

D:\> java -classic -Xrunhprof Library

The resulting log file is written at the end of the run. The command line prompt includes a notification from hprof: "Dumping Java heap... allocation sites... done." The log file is named java.hprof.txt by default and is placed in the directory of the executed application. Please note that the size of the ASCII log file for this run with 1,287 objects is bigger than 4 MB. Here is an example of the list of objects created on the heap, as recorded with hprof:

SITES BEGIN (ordered by live bytes) Wed May 30 09:04:
 rank   accum    class name
    1   9.24%    [C
    2  18.47%    [C
    3  25.03%    [C
    4  29.64%    [B
    5  34.26%    [B
    6  38.81%    [S
    7  42.90%    [C
    8  45.42%    [C
    9  47.45%    [L<Unknown>;
   10  48.88%    [I
   11  50.30%    java/lang/Class

The list of objects is sorted by the live bytes allocated for each object. It doesn't, however, provide information about the memory allocated for an object and all of its descendants, because they will all stay in memory as long as the observed object contains a valid reference to its descendants. Such objects are important to trace, since they may impact the size of the allocated memory not directly, but through their descendants.

In the hprof log file, more information about a particular object can be obtained from the trace information. It is called trace information because of the technique used for sampling information about the executed methods: the profiling tool, in this case hprof, takes regular snapshots of the call stack and calculates statistics for all the methods it finds there.

For example, let's look at the object at the top of the list, which is of the type array of characters (as shown in the last column with the symbol [C). Trace number 1226, with the stack trace of the methods that were called prior to the object creation, gives us the following information:

TRACE 1226:
java/io/BufferedWriter.<init>(BufferedWriter.java:94)
java/io/BufferedWriter.<init>(BufferedWriter.java:77)
java/io/PrintStream.<init>(PrintStream.java:85)
java/lang/System.initializeSystemClass(System.java:820)

What we can learn from the stack trace is that this object of the type array of characters was created to buffer a stream of characters. Its size, however, didn't cause the Out Of Memory exception. It was the large number of books on the bookshelves that killed the execution.
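For reference, here is a minimal sketch of the kind of structure the test application builds. The class names follow the description above, but the constructors, field names, and array sizes are assumptions; the actual Java Pitfalls source differs in its details:

    // Library -> BookShelf[] -> Book[] -> TextPage[]: every level keeps live
    // references to the next, so none of these objects ever becomes
    // eligible for garbage collection.
    class TextPage {
        char[] text = new char[200];               // assumed page size
    }

    class Book {
        TextPage[] pages = new TextPage[200];      // assumed page count
        Book() {
            for (int i = 0; i < pages.length; i++) pages[i] = new TextPage();
        }
    }

    class BookShelf {
        Book[] books = new Book[100];              // assumed shelf size
        BookShelf(String subject) {
            System.out.println("Creating BookShelf, subject=" + subject);
            for (int i = 0; i < books.length; i++) books[i] = new Book();
        }
    }

    public class Library {
        public static void main(String[] args) {
            BookShelf[] shelves = new BookShelf[20];
            for (int i = 0; i < shelves.length; i++) {
                // Eventually exhausts the heap and throws OutOfMemoryError.
                shelves[i] = new BookShelf("Subject " + i);
            }
        }
    }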

There is no doubt that hprof offers some interesting information about object creation, but it is extremely difficult to use. Probably the best way to use it would be to collect the information about heap usage in the binary file and write a custom tool to display and highlight the important information. We can also go a step further and look at a commercial solution for Java memory profiling: Rational Purify for Java. Some vendors refer to memory profiling tools as "memory debuggers." In order to provide detailed and comprehensive information about memory usage, Purify for Java uses both the Java 2 JVMPI and the Byte Code Insertion technique.

Object Profiling with Rational Purify for Java

Purify for Java can be invoked from the command line by specifying the Rational JVMPI library instead of the hprof tool:

D:\> java -classic -XrunPureJVMPI:Purify Library

This tells the java executable to load the Purify for Java DLL (PureJVMPI.dll) and to start memory profiling. The option Purify tells the DLL to use Purify for Java as the target tool. (Specifying Quantify as the target for the data collection would instead launch Rational Quantify, the tool for execution time analysis.) The option -classic tells the java executable to load the classic version of the Java Virtual Machine rather than the default, performance-optimized "HotSpot" version. Future versions of Purify for Java will support the "HotSpot" Virtual Machine from the Java Development Kit onwards.

Identifying Memory Leaks

When the application starts, in the command prompt we can see the objects being created:

E:\Gogo\java\Library>java -classic -XrunPureJVMPI:Purify Library
Creating BookShelf: 1, subject=Sports
Creating BookShelf: 2, subject=New Age
Creating BookShelf: 3, subject=Religion
Creating BookShelf: 4, subject=Sci-Fi
Creating BookShelf: 5, subject=Romance
Creating BookShelf: 6, subject=Do-It-Yourself

The application continues to run, and while creating the ninth BookShelf object, it exceeds the maximum memory available to the application and raises an exception:

Exception in thread "main" java.lang.OutOfMemoryError

This is a good moment to take a snapshot of the memory profile. We can use snapshots that we record during the run to examine object creation. In my article about searching for memory leaks in the January issue of The Rational Edge, we saw that by comparing these snapshots, you can determine whether the Java garbage collection cleaned unused objects in memory.

This time, however, we do not need to compare two snapshots to find the memory problem. It is clearly visible within the Memory tab of the basic Purify Data Browser window, as shown in Figure 1.

Figure 1: Rational Purify Displaying Memory In Use

Memory usage for objects allocated for bookshelves after the sixth one -- where we took the first snapshot -- grows significantly, despite garbage collection. In this case we are not dealing with memory leaks, but rather with memory in use. All the objects in memory have valid references to them, so they are not eligible for garbage collection.

Pinpointing the Source of Excessive Memory Consumption

Another view that Purify for Java offers us is the Call Graph. The Call Graph highlights the application's "hot spot": the chain of calls where most of the memory was allocated, as shown in Figure 2.

Figure 2: Rational Purify Showing the Chain of Calls with Greatest Memory Usage

Unlike with hprof, this time we are really on the way to learning more about the memory used for the numerous objects created during the run. The methods displayed on the Call Graph are constructors for instances of the class files BookShelf, Book, and TextPage.

This capability is new to Rational Purify for Java. As you can see, Version 2001 offers much more than just an overview of the methods where memory is allocated: It actually provides a view into Java objects and object references; it allows us to see all the objects that were "live" at the moment of the snapshot.

Finding More Information About Objects and Their References

Let's continue on the road to monitoring object creation by choosing the constructor for the Library object from the list of methods that were executed during the run, as shown in Figure 3.

Figure 3: Rational Purify Showing Details for a Method

Besides information about memory allocated in the method and the list of callers and direct descendants, there is now a list of objects created in the method, along with the number of references to these objects. At the root of the list is an array of the type BookShelf. It stores the references to all seven BookShelf objects that are listed below the root. A quick look at the source file from within the Rational Purify for Java window (Figure 4) confirms that these objects are created in the main method by calling the constructor for a new BookShelf object.

Figure 4: Source Code Line for Creating a New BookShelf Object

The BookShelf objects array is also easy to recognize (Figure 5). An array in Java is a special object that keeps references to the actual objects of the type for which the array is created.

Figure 5: BookShelf Objects Array in Source File

If we open the Unknown array object and look at it in the Object Detail window (Figure 6), then the story about the array object and the references to the real objects becomes very clear.

Figure 6: Detail View of an Array Object

The lower right portion of the screen in Figure 6 shows the Object Data Window, with references to the array of Books. Each BookShelf object contains the reference to the array of Book objects. Rational Purify for Java displays only one reference per array element to keep the overview clean. If you drill down into the Unknown class -- which is actually an array of Book objects -- then you can see all the references and relationships among the objects in the Object Data Window. Then, double-clicking on a reference will lead you directly to the object behind that reference.

If we choose, for example, the third element of the BookShelf array (referenced as 21C7F948), then we will see that it has two references. One is a reference to the array of Books (21C7F958), and the other is a reference to the String object. If we continue exploring the latter reference, we will see that it is a character array with the content Religion, and that is indeed the third BookShelf.

There are numerous ways to exploit the capabilities of a commercial memory-profiling tool like Purify for Java. One of them would be to monitor the size of the largest objects and descendant groups of objects by taking several snapshots throughout the run of the application. You could then identify the objects that consume the most memory, as well as the numbers of certain objects.

Optimizing Memory Usage in Java Applications

We have seen how excessive object creation in Java can cause problems and how a memory profiling tool such as Rational Purify for Java can help programmers optimize memory usage in Java applications. Memory profiling tools can also measure and document the effects of code changes on an application's overall performance, and the references listed below provide many suggestions and solutions for improving application performance. By following these suggestions and using the right tools throughout development, programmers can help ensure greater success for Java projects.

Footnotes

1. Java Virtual Machine Profiler Interface specification (JVMPI).
2. Michael C. Daconta (Editor), Eric Monk, J. Paul Keller, and Keith Bohnenberger, "Java Pitfalls." John Wiley & Sons, April 2000.

References

1. Memory Profiling in Java
2. Java Virtual Machine Profiler Interface specification (JVMPI)
3. JavaSoft HAT: JDCBook/perf3.html#profile
4. Rational Developer Tool documentation
5. Michael C. Daconta (Editor), Eric Monk, J. Paul Keller, and Keith Bohnenberger, "Java Pitfalls." John Wiley & Sons, April 2000.

6. Steve Wilson and Jeff Kesselman, "Java Platform Performance: Strategies and Tactics." Sun.
7. Jack Shirazi, "Java Performance Tuning." O'Reilly.
8. Craig Larman and Rhett Guthrie, "Java 2 Performance and Idiom Guide." Prentice Hall.
9. Patrick Niemeyer and Jonathan Knudsen, "Learning Java." O'Reilly.
10. Peter van der Linden, "Just Java." SunSoft.

A Strategy for Managing Multiple Baselines with Rational ClearCase

by Jim Ressler
Senior Project Manager
Raytheon Company

One of the most powerful features of Rational ClearCase is its ability to create views into multiple versions of many elements, based upon a conditional set of expressions called a configuration specification, or config spec for short. To truly master ClearCase, users must understand how to take advantage of this capability. This article presents an approach for managing several baselines, using a strategy that includes promoting elements to the next release and giving users views into those baselines.

A typical software baseline changes frequently as new versions are developed, tested, and checked in to create the next release. In addition, fixes are continually being made to the current version as well as to past versions. Typically, at any given time at least three versions of the software co-exist, and Rational ClearCase provides views of these versions to people with different roles (see Figure 1).

Figure 1: Software Versions and Views in Rational ClearCase
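If you have not worked with one before, a config spec is simply an ordered list of element-selection rules; ClearCase reads them top to bottom and selects, for each element, the first version that matches. The out-of-the-box default config spec, for instance, is just:

element * CHECKEDOUT
element * /main/LATEST

The strategy below works by inserting label-based rules between these two extremes, so that a view sees a controlled, promoted baseline rather than everyone's latest checkins.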

Promoting Files to an Intermediate State

As files undergo changes and become ready for the next release, you can use labels and the config spec to promote them to an intermediate state in ClearCase. Suppose you assign the following labels:

REL1.0 - current release
REL2.0 - next release

In between these releases, you can label new and modified files with BUILD labels. For example:

BUILD2.1 - first build for REL2.0
BUILD2.2 - second build for REL2.0
BUILDX.Y - where X is the next release and Y is the build number

You can create these labels using the mklbtype command in ClearCase.

Commonly, the config spec for development looks only at the LATEST and checked-out versions. Unfortunately, this means it views other people's changes before they have been tested. When a developer modifies source files, he will check out the version from either the latest release or the new build, if it has been changed. So the developer's view is of a gradually moving baseline, in which he can see not only his own changes, but also the tested changes of others. This window of change is controlled by applying the BUILDX.Y label and setting the config spec accordingly. Using these labels, the config spec will apply a succession of rules to see the files labeled for a particular view, as shown in Figure 2.

Figure 2: Files Labeled for Particular Views
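The config spec behind Figure 2 is not reproduced in the article, but a sketch of the rule ordering it describes -- using the label names assumed above -- would look something like this:

# Developer view working toward REL2.0 (sketch only)
element * CHECKEDOUT
element * BUILD2.2
element * BUILD2.1
element * REL1.0

Checked-out versions win first, then anything promoted to the newest build, then the previous build, and finally the last release; versions carrying none of these labels are simply not selected, which is what keeps untested changes out of the view.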

The config specs should be maintained in files kept in a common directory and used by the project team when entering their view with the ClearCase setview command.

Promoting Files to a Build

When a developer has completed her changes, performed a unit test, and checked the files in, then the files can be promoted to the build by running this command for each file:

cleartool mklabel -rep BUILDX.Y <filename>/version

These commands should be run after the developer has submitted files for incorporation into the next release, and the files have been approved by the software lead. When developers fix problems or add features, they may use an additional label for the problem number in order to audit changes in the next release. Only the build and release labels, however, are used in the config spec to control the view.

To capture intermediate versions between releases (a patch, for example), you can use a subsequent build label. In such cases, you should lock the previous build label to prevent further changes:

cleartool lock -c "BUILDX.Y frozen" lbtype:BUILDX.Y

In addition, your config spec should change to pick up the next build, BUILDX.(Y+1), as shown for Build 2 in Figure 2. At any time, developers can see the current files submitted to the build by searching for the build label. When it is time to create the next release, they should check in and label any outstanding changes that need to be in the build. They can also use the following command to find anything that is checked in but not labeled:

cleartool find <root directory> "{(version(/main/LATEST)) && \
    !((version(BUILD2.2)) || (version(BUILD2.1)) || (version(REL1.0)))}" -print

Creating a New Release

When you create the new release, all of the BUILDX.Y labels should be locked (using the cleartool lock command). The Configuration Management administrator should perform a final recompile using the clearmake command. Using the last build's config spec (such as the one for Build 2 above), all files in the new release (from both the previous release and all the builds) can be labeled with the following command:

cleartool mklabel -recurse REL2.0 <root directory>

After that, the config spec for the configuration manager should be changed to point only to the new release:

element * REL2.0 -nocheckout
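Pulled together, the release cut might look roughly like the following sequence; the paths and label names are carried over from the examples above and are illustrative only:

# Freeze the build labels that fed the release
cleartool lock -c "REL2.0 cut" lbtype:BUILD2.1 lbtype:BUILD2.2

# From the Build 2 view, label everything that view selects as the new release
cleartool mklabel -recurse REL2.0 <root directory>

# Then point the configuration manager's config spec at the release only:
#   element * REL2.0 -nocheckout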

A final acceptance test should be run before installing the new baseline. Any fixes made during the test can be incorporated by creating another BUILDX.Y label and adding it to the config spec. Once you've completed these steps, the new release is ready for shipment, and you can repeat the process for the next release.

Limitations of This Approach

Note that this approach does not encompass parallel development of multiple baselines for the same release. You can accomplish this by using additional labels in conjunction with ClearCase's branch and merge functions, but I'll save that for another article. If you apply the strategy I've described above, then you will be well on your way to having a controlled baseline for developers, Configuration Management, and project leadership to work on together.

Questions or comments about the ideas in this article? Send them to Jim Ressler via mperrow@rational.com.

Growing Into Your SCM Solution

from Software Configuration Management Strategies and Rational ClearCase: A Practical Introduction
by Brian A. White and Geoffrey M. Clemm (Addison Wesley, 2000)

Last year, Rational's Brian White published a definitive book on the engineering discipline of software configuration management (SCM). In it, he explains how Rational ClearCase automates and supports SCM best practices through unified change management (UCM), Rational's approach to managing change throughout the software development lifecycle, from requirements to release.

In this second chapter of Software Configuration Management Strategies and Rational ClearCase: A Practical Introduction, reprinted in its entirety by permission from Addison Wesley, White discusses the increasing complexity of software development projects and the consequent demand for richer SCM support. While reviewing the history of SCM tool evolution, he looks at five categories of software projects, ranging from those developed by a single individual to projects with multiple, geographically distributed project teams. The material in this chapter lays the groundwork for a detailed discussion in Chapter 3 of UCM, a topic that The Rational Edge will also examine closely in future issues.

Chapter 2 PDF file (1131K)

Automated Testing: A Silver Bullet?

by Dawn Haynes
Automated Testing Evangelist
Rational Software

In 1986, Frederick P. Brooks, Jr. wrote a paper called "No Silver Bullet -- Essence and Accidents of Software Engineering." This paper conveyed some of the expectations that folks had about advances in software engineering technologies and contrasted them with the realities. His argument can be summed up as follows:

There is no single development in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

Brooks encourages us to think of technologies and techniques as more evolutionary than revolutionary. When it comes to thinking about introducing automation to any kind of testing effort, I would like to encourage a similar approach.

In the five years I have worked with potential customers of automated testing products and solutions, I have encountered a significant amount of "silver bullet" thinking. This manifests itself through assumptions such as:

1. We will be able to automate all testing!
2. Test automation will increase productivity so much that we'll be able to do all the testing with fewer people (eliminate staff).

3. Test automation is so easy that we won't need to do any training.
4. Automation will reduce our whole testing workload.
5. We won't need to do any test planning.
6. Doesn't automation make human testers "obsolete" or "redundant"?
7. That time-intensive test design effort will no longer be necessary.

Although I hate bursting people's bubbles, I have always felt compelled to help them understand the difference between implementing automated testing and attaining the Holy Grail. Most often, this means explaining what automated testing actually is, and what automated testing tools and solutions can actually do.

Are You Saying That Automated Testing Is NOT a Silver Bullet?

That's the idea. Automated testing -- or the implementation of test automation strategies and tools -- is just one big hammer in the tester's toolbox. Notice that I said it's a tool and that its place is in a toolbox. I'm purposefully avoiding equating automation with human testers: there's just no replacing those. Still, there is no question that test automation is powerful stuff and can provide benefits in terms of efficiency and thoroughness. The key is determining when and how to use its power. Let's begin doing that by posing another question.

Is There Ever Enough Time to Test Everything?

My guess is that the answer to this question is a universal and resounding "No!" There's always one more thing we could test or another platform or configuration we'd like to try. But as deadlines and ship dates draw closer, the time allocated for each testing cycle shrinks. So, how do software development project managers and testing groups deal with this? Typically they reduce the amount of testing they do for each cycle leading up to release. Have you ever experienced this? Ideally, it would be better to do some risk-based analysis to determine what to eliminate; more often, however, teams just narrow the focus of the entire testing cycle to verifying fixed defects. And often there isn't even enough time to complete this reduced testing plan.

How many products get shipped only when testing is complete? I don't hear about such scenarios very often. Usually teams look at other factors when

they make a ship/don't ship decision: Have we run out of time? Run out of budget? Run out of resources? Run out of pizza and beer? Unfortunately, when testing is cut off arbitrarily, the development team doesn't know enough about the product's overall quality, and they run the risk of shipping serious problems. Is this a dilemma we could resolve by applying the power of automated testing? Let's investigate.

How Can Automated Testing Help?

Before you build a plan to implement automation, you should understand how you define it. In other words, what does automation mean to you? Here are a few ways I've heard people describe automated testing:

1. Testing that requires no human intervention at all.
2. Test scripts.
3. Test tools.
4. I don't know.

Sometimes people interpret the notion of automated testing too narrowly, focusing only on test scripts generated by tools or by programming. In fact, automation can have a much more expansive meaning. Consider this definition from a Quality Engineering group that is building a set of test automation guidelines:

Automation, in our context, is the use of strategies, tools and artifacts that augment or reduce the need for manual or human involvement or intervention in unskilled, repetitive or redundant tasks.

In addition to this definition, the guidelines provide examples of automation methods the group employs, a few of which are listed in Chart 1.

Chart 1: Automation Methods for Testing

Template
Description: An outline of an artifact, usually containing formatting and guidance for adding content. Used as a starting point for creating an artifact.
Example: Test case or test plan template (created internally, based on a sample in a book, or taken from a third-party tool).

Test Scripts
Description: Machine readable/executable instructions that (typically) automate the execution of a test. May be generated by a tool or hand coded.
Example: Visual Test scripts, Rational Robot scripts, Perl scripts, or other coded executables or Dynamic Link Libraries.

Images
Description: Compressed files or backups that are used to quickly return an environment to a predetermined state (in preparation for manual or automated testing).
Example: Create disk images using third-party tools or backup software.

Macros
Description: Machine readable/executable instructions (usually in the context of a specific application) that automate the execution of a specific task or set of tasks.
Example: Third-party tool macros (like those from Microsoft Excel), which capture, format, and merge data for management reporting.

Batch Files
Description: Machine readable/executable instructions (usually in the context of an operating system or Integrated Development Environment (IDE)) that automate the execution of a specific task or set of tasks.
Example: Instructions used to install/configure specific options using the DOS (or other IDE) command console.

Does this small set of examples get you thinking about automation in a different way? Now, it is important to define what automation means to you and your test team. Then you can use that definition to begin building a set of automation guidelines so that anyone on the team can quickly assess, using the same methods, whether a task is a suitable candidate for automation.

Creating Automated Testing Guidelines

Here are some strategies and issues to consider as you shape your definition and guidelines.

1. Define where automation fits.
- Target specific areas of the total effort as candidates for automation.
- Start with highly redundant tasks or scenarios.
- Automate repetitive tasks that are boring or tend to cause human error.

- Focus on well-developed and well-understood use cases or scenarios first.
- Choose relatively stable areas of the application over volatile ones.
- Enhance automation by using data-driven testing techniques (increase the depth and breadth of testing coverage).
- Don't make everyone on the test team responsible for automation; designate a few specialists.
- Know that 100 percent automation is not a realistic goal, and that manual testing will still be essential.

2. Plan to do more testing. Automating repeated tests leaves more time to test using other methods:
- Increase exploratory testing.
- Increase configuration testing.
- Build more automation.
- Do more manual testing, especially for high-risk features.
- Plan carefully: decide which tests will be done manually and which tests can be automated -- don't just try to automate everything.
- Design all tests and document each design.
- If an automated test cannot be run, ensure that the test can be performed manually instead.

3. Think of automation as an investment.
- Train users to fully leverage the automated tools.

- Build a reusable code base.
- Keep the tests modular and small for easier maintenance.
- Document the test scripts (code) for verification and reuse.
- Enforce back-up procedures.
- Utilize source control.
- Realize that automation is a software development effort: It often requires code generation.

4. Implement automated testing iteratively.
- Don't attempt to automate all tests on day one.
- Gain experience and implement slowly.
- Start with a small portion of the total test plan and iteratively add to the automation test suite over time (i.e., ramp up in a realistic and controlled way).

What Else Can Automation Do For Me?

Although automated testing requires a big up-front investment in terms of planning and training, it does pay off in a number of big ways, too. It can give you:

- Better quality software -- because you can run more tests in less time with fewer resources.
- Potential for more thorough test coverage.
- More time to engage in other test activities, including

detailed planning, careful test design, building more complex tests (data-driven, adding code for condition branching or special reporting, etc.), and more manual testing, not less!

Automated testing also provides intangible benefits. It can give testers:

- An opportunity to gain new skills (i.e., skill building and learning opportunities).
- Opportunities to learn more about the system under test, because automation can expose internals, like object properties and data. (Better understanding of the system produces better testers.)

Now that you know what automated testing is and what it can do, I hope you'll use this knowledge to ensure more and better testing for your products. Although it's no silver bullet, automated testing is a great tool; if you match it with the right jobs, you'll get great results.

References

For more information on some of the topics mentioned here, please read the following articles from Cem Kaner's Web site:

1. "Architectures of Test Automation"
2. "Improving the Maintainability of Automated Test Suites"
3. "Avoiding Shelfware: A Manager's View of Automated GUI Testing"

Acknowledgements

Thanks to Cem Kaner for suggesting links to his articles. I'd also like to acknowledge Ted Squire of Rational Software and James Bach of Satisfice, Inc., for their careful review and assistance in the development of this article. For more information about Satisfice and its acclaimed testing

seminars, please visit the Satisfice Web site.

Generating Test Cases From Use Cases

by Jim Heumann
Requirements Management Evangelist
Rational Software

In many organizations, software testing accounts for 30 to 50 percent of software development costs. Yet most people believe that software is not well tested before it is delivered. That contradiction is rooted in two clear facts: First, testing software is a very difficult proposition; and second, testing is typically done without a clear methodology.

A widely accepted tenet in the industry -- and an integral assumption in the Rational Unified Process (RUP) -- is that it is better to start testing as early in the software development process as possible. Delaying the start of testing activities until all development is done is a high-risk way to proceed. If significant bugs are found at that stage (and they usually are), then schedules often slip. Haphazard methods of designing, organizing, and implementing testing activities and artifacts also frequently lead to less-than-adequate test coverage. Having a straightforward plan for how testing is done can help increase coverage, efficiency, and ultimately software quality. In this article, we will discuss how using use cases to generate test cases can help launch the testing process early in the development lifecycle and also help with testing methodology.

In a software development project, use cases define system software requirements. Use case development begins early on, so real use cases for key product functionality are available in early iterations. According to the RUP, a use case "...fully describes a sequence of actions performed by a system to provide an observable result of value to a person or another system using the product under development." Use cases tell the customer what to expect, the developer what to code, the technical writer what to document, and the tester what to test.

For software testing -- which consists of many interrelated tasks, each with its own artifacts and deliverables -- creation of test cases is the first

fundamental step. Then test procedures are designed for these test cases, and finally, test scripts are created to implement the procedures. Test cases are key to the process because they identify and communicate the conditions that will be implemented in test and are necessary to verify successful and acceptable implementation of the product requirements. They are all about making sure that the product fulfills the requirements of the system. Although few actually do it, developers can begin creating test cases as soon as use cases are available, well before any code is written. We will discuss how to do this, and the advantages you can reap from it, below.

An Introduction to Use Cases

Use cases are based on the Unified Modeling Language (UML) and can be visually represented in use-case diagrams. Figure 1 shows a use-case diagram depicting requirements for a university course registration system.

Figure 1: Use Case Diagram for a University Course Registration System

The ovals represent use cases, and the stick figures represent "actors," which can be either humans or other systems. The lines represent communication between an actor and a use case. As you can see, this use-case diagram provides the big picture: Each use case represents a big chunk of functionality that will be implemented, and each actor represents someone or something outside our system that interacts with it.

It is a significant step to identify use cases and actors, but now there is

more to be done. Each use case also requires a significant amount of text to describe it. This text is usually formatted in sections, as shown in Table 1.

Table 1: Format for a Use-Case Textual Description

Name: An appropriate name for the use case (see Leslee Probasco's article in the March issue of The Rational Edge).
Brief Description: A brief description of the use case's role and purpose.
Flow of Events: A textual description of what the system does with regard to the use case (not how specific problems are solved by the system). The description should be understandable to the customer.
Special Requirements: A textual description that collects all requirements on the use case, such as non-functional requirements, that are not considered in the use-case model but that need to be taken care of during design or implementation.
Preconditions: A textual description that defines any constraints on the system at the time the use case may start.
Postconditions: A textual description that defines any constraints on the system at the time the use case will terminate.

The most important part of a use case for generating test cases is the flow of events. The two main parts of the flow of events are the basic flow of events and the alternate flows of events. The basic flow of events should cover what "normally" happens when the use case is performed. The alternate flows of events cover behavior of an optional or exceptional character relative to normal behavior, and also variations of the normal behavior. You can think of the alternate flows of events as "detours" from the basic flow of events.
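To make this structure concrete, here is a small sketch -- in Java, with entirely hypothetical class and method names -- of how a use case's basic flow and alternate flows could be represented so that the scenarios discussed later in this article can be enumerated mechanically:

// Hypothetical sketch: represent the flows, then list the simplest scenarios
// (the basic flow alone, and the basic flow plus each single alternate flow).
// Combinations of alternates, as in Table 2 below, would extend the loop.
import java.util.ArrayList;
import java.util.List;

public class UseCase {
    private final String name;
    private final List<String> alternateFlows = new ArrayList<String>();

    public UseCase(String name) { this.name = name; }

    public void addAlternateFlow(String flowName) {
        alternateFlows.add(flowName);
    }

    public List<String> scenarios() {
        List<String> result = new ArrayList<String>();
        result.add(name + ": basic flow");
        for (String alt : alternateFlows) {
            result.add(name + ": basic flow + " + alt);
        }
        return result;
    }

    public static void main(String[] args) {
        UseCase register = new UseCase("Register for Courses");
        register.addAlternateFlow("Unidentified Student");
        register.addAlternateFlow("Quit");
        register.addAlternateFlow("Unfulfilled Prerequisites, Course Full, or Schedule Conflicts");
        register.addAlternateFlow("Course Catalog System Unavailable");
        register.addAlternateFlow("Course Registration Closed");
        for (String scenario : register.scenarios()) {
            System.out.println(scenario);
        }
    }
}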

Figure 2: Basic Flow of Events and Alternate Flows of Events for a Use Case

Figure 2 represents the typical structure of these flows of events. The straight arrow represents the basic flow of events, and the curves represent alternate flows. Note that some alternate flows return to the basic flow of events, while others end the use case. Both the basic flow of events and the alternate flows should be further structured into steps or subflows.

Basic Flow - Register for Courses

1. Logon
This use case starts when a Student accesses the Wylie University Web site. The system asks for, and the Student enters, the student ID and password.

2. Select 'Create a Schedule'
The system displays the functions available to the student. The student selects "Create a Schedule."

3. Obtain Course Information
The system retrieves a list of available course offerings from the Course Catalog System and displays the list to the Student.

4. Select Courses
The Student selects four primary course offerings and two alternate course offerings from the list of available course offerings.

5. Submit Schedule
The student indicates that the schedule is complete. For each selected course offering on the schedule, the system verifies that the Student has the necessary prerequisites.

6. Display Completed Schedule
The system displays the schedule containing the selected course offerings for the Student and the confirmation number for the schedule.

Figure 3: Textual Description for the University Course Registration Use-Case Basic Flow of Events

Figure 4 shows a few alternate flows.

Alternate Flows - Register for Courses

1. Unidentified Student
In Step 1 of the Basic Flow, Logon, if the system determines that the student ID and/or password is not valid, an error message is displayed.

2. Quit
The Course Registration System allows the student to quit at any time during the use case. The Student may choose to save a partial schedule before quitting. All courses that are not marked as "enrolled in" are marked as "selected" in the schedule. The schedule is saved in the system. The use case ends.

3. Unfulfilled Prerequisites, Course Full, or Schedule Conflicts
In Step 5 of the Basic Flow, Submit Schedule, if the system determines that prerequisites for a selected course are not satisfied, that the course is full, or that there are schedule conflicts, the system will not enroll the student in the course. A message is displayed that the student can select a different course. The use case continues at Step 4, Select Courses, in the basic flow.

4. Course Catalog System Unavailable

In Step 3 of the Basic Flow, Obtain Course Information, if the system is down, a message is displayed and the use case ends.

5. Course Registration Closed
If, when the use case starts, it is determined that registration has been closed, a message is displayed, and the use case ends.

Figure 4: Textual Description for University Course Registration Use-Case Alternate Flows

As you can see, a significant amount of detail goes into fully specifying a use case. Ideally, the flows should be written as "dialogs" between the system and the actors. Each step should explain what the actor does and what the system does in response; it should also be numbered and have a title. Alternate flows always specify where they start in the basic flow and where they go when they end.

Use-Case Scenarios

There is one more thing to describe before we concentrate on how use cases can be used to generate test cases: a use-case scenario. A use-case scenario is an instance of a use case, or a complete "path" through the use case. End users of the completed system can go down many paths as they execute the functionality specified in the use case. Following the basic flow would be one scenario. Following the basic flow plus alternate flow 1A would be another. The basic flow plus alternate flow 2A would be a third, and so on. Table 2 lists all possible scenarios for the diagram shown in Figure 2, beginning with the basic flow and then combining the basic flow with alternate flows.

Table 2: Scenarios for the Use Case Shown in Figure 2

Scenario 1: Basic Flow
Scenario 2: Basic Flow, Alternate Flow 1
Scenario 3: Basic Flow, Alternate Flow 1, Alternate Flow 2
Scenario 4: Basic Flow, Alternate Flow 3

Scenario 5: Basic Flow, Alternate Flow 3, Alternate Flow 1
Scenario 6: Basic Flow, Alternate Flow 3, Alternate Flow 1, Alternate Flow 2
Scenario 7: Basic Flow, Alternate Flow 4
Scenario 8: Basic Flow, Alternate Flow 3, Alternate Flow 4

These scenarios will be used as the basis for creating test cases.

Generating Test Cases

A test case is a set of test inputs, execution conditions, and expected results developed for a particular objective: to exercise a particular program path or verify compliance with a specific requirement, for example. The purpose of a test case is to identify and communicate conditions that will be implemented in test. Test cases are necessary to verify successful and acceptable implementation of the product requirements (use cases).

We will describe a three-step process for generating test cases from a fully detailed use case:

1. For each use case, generate a full set of use-case scenarios.
2. For each scenario, identify at least one test case and the conditions that will make it "execute."
3. For each test case, identify the data values with which to test.

Step One: Generate Scenarios

Read the use-case textual description and identify each combination of main and alternate flows -- the scenarios -- and create a scenario matrix. Table 3 shows a partial scenario matrix for the Register for Courses use case. This is a simple example with no nested alternate flows.

Table 3: Partial Scenario Matrix for the Register for Courses Use Case

Scenario Name: Starting Flow, Alternate Flow

Scenario 1 - Successful registration: Basic Flow
Scenario 2 - Unidentified student: Basic Flow, A1
Scenario 3 - User quits: Basic Flow, A2
Scenario 4 - Course catalog system unavailable: Basic Flow, A4
Scenario 5 - Registration closed: Basic Flow, A5
Scenario 6 - Cannot enroll: Basic Flow, A3

Step Two: Identify Test Cases

Once the full set of scenarios has been identified, the next step is to identify the test cases. We can do this by analyzing the scenarios and reviewing the use-case textual description as well. There should be at least one test case for each scenario, but there will probably be more. For example, if the textual description for an alternate flow is written in a very cursory way, like the description below,

3A. Unfulfilled Prerequisites, Course Full, or Schedule Conflicts

then additional test cases may be required to test all the possibilities. In addition, we may wish to add test cases to test boundary conditions.

The next step in fleshing out the test cases is to reread the use-case textual description and find the conditions or data elements required to execute the various scenarios. For the Register for Courses use case, conditions would be student ID, password, courses selected, etc. To clearly document the test cases, once again, a matrix format is useful, like the one in Table 4. Notice the top row. The first column contains the test case ID, the second column has a brief description of the test case, including the scenario being tested, and all other columns except the last one contain data elements that will be used in implementing the tests. The last column contains a description of the test case's expected output.

Table 4: Test Case Matrix for the Register for Courses Use Case

Test Case ID | Scenario/Condition | Student ID | Password | Courses selected | Prerequisites fulfilled | Course Open | Schedule Open | Expected Result

RC 1 | Scenario 1 - successful registration | V | V | V | V | V | V | Schedule and confirmation number displayed
RC 2 | Scenario 2 - unidentified student | I | N/A | N/A | N/A | N/A | N/A | Error message; back to login screen
RC 3 | Scenario 3 - valid user quits | V | V | N/A | N/A | N/A | N/A | Login screen appears
RC 4 | Scenario 4 - course registration system unavailable | V | V | N/A | N/A | N/A | N/A | Error message; back to step 2
RC 5 | Scenario 5 - registration closed | V | V | N/A | N/A | N/A | N/A | Error message; back to step 2
RC 6 | Scenario 6 - cannot enroll -- course full | V | V | V | V | I | V | Error message; back to step 3
RC 7 | Scenario 6 - cannot enroll -- prerequisite not fulfilled | V | V | V | I | V | V | Error message; back to step 4
RC 8 | Scenario 6 - cannot enroll -- schedule conflict | V | V | V | V | V | I | Error message; back to step 4

Notice that in this matrix no data values have actually been entered. The cells of the table contain a V, I, or N/A. V indicates valid, I is for invalid, and N/A means that it is not necessary to supply a data value in this case. This specific matrix is a good intermediate step; it clearly shows what conditions are being tested for each test case. It is also very easy to determine by looking at the Vs and Is whether you have identified a sufficient number of test cases. In addition to the "happy day" scenarios in which everything works fine, each row in the matrix should have at least one I indicating an invalid condition being tested. In the test case matrix in Table 4, some conditions are obviously missing -- e.g., Registration Closed -- because RC3, RC4, and RC5 each has the same combination of Is and Vs.

Step Three: Identify Data Values to Test

Once all of the test cases have been identified, they should be reviewed and validated to ensure accuracy and to identify redundant or missing test cases. Then, once they are approved, the final step is to substitute actual data values for the Is and Vs. Without test data, test cases (or test procedures) can't be implemented or executed; they are just descriptions of conditions, scenarios, and paths. Therefore, it is necessary to identify actual values to be used in implementing the final tests. Table 5 shows a test case matrix with values substituted for the Is and Vs in the previous matrix. A number of techniques can be used for identifying data values, but these are beyond the scope of this article.

Table 5: Test Case Matrix with Data Values

Test Case ID | Scenario/Condition | Student ID | Password | Courses selected | Prerequisites fulfilled | Course Open | Schedule Open | Expected Result
RC 1 | Scenario 1 - successful registration | jheumann | abc123 | M101, E201, S101 | Yes | Yes | Yes | Schedule and confirmation number displayed
RC 2 | Scenario 2 - unidentified student | Jheuman1 | N/A | N/A | N/A | N/A | N/A | Error message; back to login screen
RC 3 | Scenario 3 - valid user quits | jheumann | abc123 | N/A | N/A | N/A | N/A | Login screen appears
RC 4 | Scenario 4 - course registration system unavailable | jheumann | abc123 | N/A | N/A | N/A | N/A | Error message; back to step 2
RC 5 | Scenario 5 - registration closed | jheumann | abc123 | N/A | N/A | N/A | N/A | Error message; back to step 2
RC 6 | Scenario 6 - cannot enroll -- course full | jheumann | abc123 | M101, E201, S101 | Yes | M101 full | Yes | Error message; back to step 3
RC 7 | Scenario 6 - cannot enroll -- prerequisite not fulfilled | jheumann | abc123 | M101, E201, S101 | No for E201 | Yes | Yes | Error message; back to step 4

RC 8 | Scenario 6 - cannot enroll -- schedule conflict | jheumann | abc123 | M101, E201, S101 | Yes | Yes | E202 and S101 conflict | Error message; back to step 4

Putting It All Together

In current practice, use cases are associated with the front end of the software development lifecycle and test cases are typically associated with the latter part of the lifecycle. By leveraging use cases to generate test cases, however, testing teams can get started much earlier in the lifecycle, allowing them to identify and repair defects that would be very costly to fix later, ship on time, and ensure that the system will work reliably. Using the clearly defined methodology I've outlined above for generating test cases, developers can simplify the testing process, increase efficiency, and help ensure complete test coverage.
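To show where these artifacts ultimately lead, here is a minimal sketch of how test case RC 2 (unidentified student) might be implemented as an automated test script. The class and method names are invented for illustration, and the in-memory stub stands in for the real registration system so the sketch runs on its own:

// Hypothetical test script for test case RC 2; not part of the article's example.
public class RegisterForCoursesTests {

    public static void main(String[] args) {
        testRC2UnidentifiedStudent();
    }

    // RC 2: invalid student ID, expect an error and a return to the login screen.
    static void testRC2UnidentifiedStudent() {
        RegistrationSystemStub system = new RegistrationSystemStub();
        // Data values come from the Table 5 row for RC 2; the password is N/A there.
        boolean loggedOn = system.logon("Jheuman1", "abc123");
        check(!loggedOn, "logon must fail for an unknown student ID");
        check("login".equals(system.currentScreen()), "user must be back at the login screen");
    }

    static void check(boolean condition, String message) {
        System.out.println((condition ? "PASSED: " : "FAILED: ") + message);
    }
}

// Stand-in for the real course registration system, included only so the sketch compiles and runs.
class RegistrationSystemStub {
    private static final String KNOWN_STUDENT = "jheumann";
    private String screen = "login";

    boolean logon(String studentId, String password) {
        if (KNOWN_STUDENT.equals(studentId)) {
            screen = "main menu";
            return true;
        }
        screen = "login";   // invalid ID: stay on the login screen
        return false;
    }

    String currentScreen() {
        return screen;
    }
}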

The Three Phases of Life

by Joe Marasco
Senior Vice President
Rational Software

One of life's great fascinations is watching people evolve over time. Some people grow and develop, while others seem to be stuck in patterns that limit their happiness and well-being. Others excel in certain areas of their lives while failing miserably in others. A small few are spectacularly successful by conventional measures yet are perpetually dissatisfied. Is there a simple model we can use to make sense of these observations?

Many years of watching and thinking have led me to believe that we can further our understanding by simplifying the problem. The model we use consists of three fundamental states, characterized by the Yiddish words schlepper, macher, and mensch. First I will describe the states, and how people move from one state to the next. Then I will explain how people can get stuck in one of the earlier states, and how to address that failure mode. In addition, I'll talk about people in different states in different parts of their lives at a given time. Finally, I'll address the issue of the distribution of the population in the various states, and the implications for getting along in the real world.

I want to be a little precise with words here. I call the three states "phases," because I believe that there is a natural progression that is accessible to all people. The phases become available as people grow, mature, and come to terms with the real world, learning how to make appropriate compromises between their belief systems and the exigencies of everyday life. Unfortunately, sometimes people get "stuck" in a phase and don't move on. That leads to thinking of them as a "class" of people. But the word class is overloaded with lots of other implications, social and otherwise. Hence I avoid the use of that term throughout the article.

Why is this important? We have a tendency to believe that life is complex, and there is a wealth of academic research on the interactions of social groups in many different contexts: family, business, teams, and so on. Most of it is inaccessible to the average person. What I have come to believe is that this very simple model explains a wide variety of real world data and has predictive power. A simple model that people can understand and apply and that works eighty percent of the time is more useful than a complex and hard-to-use model that works ninety-five percent of the time.

Schlepper

Let us begin with the first state. People in this phase are collectively known as "schleppers." This term comes from the Yiddish verb "schlep," which means "to drag." Colloquially, it also means to carry something around, as in "schlepping those bags through the airport." In most common parlance, a "schlepper" is thought of as a lazy, sloppy person, but this is not the connotation that I wish to apply here. For me, a schlepper is someone who is in the first stage of his or her development.

Literally, a schlepper is a carrier. In the good old days, a perfect example of a schlepper was a caddie, a kid who carried golf bags. You are not doing a lot of heavy thinking when you are schlepping; you are performing useful but perhaps menial labor, usually in the service of someone else. Schlepping is not very glorious, but nonetheless one should not underestimate its importance.

First of all, just because you are schlepping does not mean you are forbidden to think. In fact, just the opposite is true: because the work content of schlepping includes little thinking, you can use this time to think and learn while you schlep. Many creative ideas occur during schlepping. For instance, how can I schlep this stuff with less effort? One of the very first caveman (or perhaps I should say "caveperson") schleppers invented the wheel as a result. The act of routinely repeating a boring, uninteresting task, or having to expend what seems like an inordinate amount of labor to achieve a mundane goal, often causes even the dullest schlepper to have an idea -- necessity (made most obvious by pain or fatigue) being the mother of invention.

My experience is that people who have schlepped often see new and interesting ways to avoid schlepping, even when the schlepping is associated with a new domain. They develop instincts for when something is going to turn into a big schlep, and head off that eventuality at the pass. Ex-schleppers make great engineers, for example.

In general, we all need to schlep. It builds character, as trite as that may sound. It teaches us humility. Humility of the sort "If I don't get smarter about this, I'm going to have to schlep the rest of my life." There are some interesting aspects of this phenomenon. Schleppers quickly perceive the great injustice of life. Here you are, young, smart, good looking, and so on, and you have to schlep for some old, fat, dull idiot who just happens to be your boss. How did that happen?

Sometimes these bosses can be downright stupid, to the point of making you schlep more than you should have to. Other times, they can increase your grief through deliberate cruelty. And because you are the designated schlepper, you have two choices: schlep in silence, or go schlep somewhere else. The third option, making a ruckus, is usually counterproductive, as schleppers are basically interchangeable by definition, and noisy ones are quickly replaced.

Some amazing truths reveal themselves to observant schleppers. For example, schlepping in silence causes erosion of the stomach lining, so the learning schlepper will attempt to deal creatively with his work or social situation in such a way as to minimize grief. Quitting and schlepping somewhere else (option two, above) is most often found to not be a solution at all, for just as all schleppers are interchangeable, all schlepping jobs are basically the same. Most of the time, it's out of the frying pan and into the towering inferno.

Skipping over the schlepper phase is dangerous, even if you could do it. Actually, some people do -- those who are born rich. They never get to experience the joys of schlepping -- for instance, the joy of creative schlepping, or the pride one takes in a load well-schlepped. As a result, they never understand what most of the world is going through. They take too much for granted and are not well grounded in reality. And, it is almost impossible to become a schlepper later in life if you never were one to start with. But more important, you miss out on important lessons -- humility, the value of a dollar earned through a hard day's work, the intrinsic unfairness of the world, and how screwed up things are down in the trenches. The other irreplaceable lesson comes through contact with the enormous variety of people the real world presents the schlepper -- the gonifs,1 the liars, the cheats, and what used to be called in less politically correct times, "the common people." Most important, there are those wonderful others who see something special in you and say to themselves "Why is this kid schlepping? Surely he can do more," and then act on it. They become our mentors, coaches, and champions, and that is one of the ways we move beyond the schlepper phase.

Sooner or later, every schlepper must come to understand that in order to make progress, you have to move beyond the schlepper phase. This involves investment. You can schlep forever and blame it on the evils of the class system, or free-market capitalism, or whatever, but the system is there. To stop schlepping, you have to be able to do something that gets someone to say, "Hey, I'm not paying you to schlep that stuff, get someone else to do it!" Often this takes the form of actually making the effort to get more education or training, thinking, or doing something that makes you stand out in a positive way. It requires, in Churchill's words, blood, sweat, tears, and toil. You must show that you can add value at the next level. This is a two-part proposition. First, you have to get the training, acquire the skills, get the result, do the deed. Then, you have to get someone influential to recognize that something has changed, and that you are ready to graduate from the schlepper phase. These are the mentors described above.

So, we all start out as schleppers. Kids are the schleppers in every family. Think of being a schlepper as being an apprentice. Kids are apprentice adults. If they are watchful, can avoid getting killed, and listen from time to time, they can graduate to adulthood. If not, they remain kids forever. Resign yourself that in everything you do -- every new job, every new sport, every new relationship -- you start out as a schlepper. How long you remain one is up to you. And remember, while you are a schlepper, to maintain your dignity.

Macher

The second phase of life is that of the macher. I believe the origin of "macher" is related to the verb "to do" or "to make." Phase two is the longest, and in some ways, the most enjoyable phase of life. A macher is someone who gets things done, who makes things happen, who gets results. When you are a macher, you are "putting points on the board." This phase is incredibly productive, and most machers get a real sense of satisfaction from doing what they do. Some machers enjoy it so much that they stay machers forever -- and this is not a totally bad thing. If it weren't for the machers of the world, we'd all still be schlepping.

Machers are not just the inventors, the entrepreneurs, the craftsmen, and the geniuses -- although those folks generally are machers. What distinguishes a macher is that he or she adds value and makes a difference. Being a macher is usually equated with high performance, not the ordinary or mundane. Those who put in their eight hours and don't mess up too often aren't machers; they're sort of advanced schleppers. No, to be a macher, you have to be in that category that is often characterized by the exclamation "We need a real macher to fix this!" In many firms, machers are the "rainmakers," the folks who generate business. The litmus test is this -- if you take away the macher, the organization not only suffers greatly, it's just not the same.

Machers have the following interesting characteristics. They are usually very focused, to the point of being driven. They are intense. They are results-oriented. They understand the goal and can get it in the crosshairs. It is usually a bad thing to get between a macher and the macher's desired result. Machers are charismatic, in both the good and bad sense. It is unusual for a macher to not be charismatic, because this trait is so often linked with leadership. There are exceptions, but not enough of them to warrant more space here.

There is a dark side. Machers will err on the side of believing that the end justifies the means, because, to them, it does. They can be absolutely ruthless. People who are squeamish about hurting other people's feelings will often employ machers, who have no such compunctions. The macher has no illusions about what he's getting paid for -- it's to get a result. But, if the truth be known, the macher would almost always do it for free --

achievement is a very potent drug.

Machers can be self-limiting. The really good machers discover early in their careers that you have to be careful about breaking too much glass. Annoy enough people and you won't be able to get others to help you -- even other machers! There are a lot of obnoxious young machers, but very few obnoxious old machers; the reason is obvious. It's hard for machers to progress if they can't build groups, consisting, incidentally, of other machers. The scope of the problems they are asked to solve increases, and gets to the point where fielding a team is the only answer. If the macher is incapable of developing the interpersonal skills necessary to get others to play, he will eventually wind up isolated and be overtaken by even more clever machers.

Machers enjoy a side benefit that is not insignificant. To some extent, they can be prima donnas and make their own rules. Why? Because many people and organizations will tolerate some pretty outrageous behavior if the problem to be solved is serious enough or the gain is big enough. So the macher can avoid much of the petty tyranny of organizations and bureaucracies by explicitly placing himself outside the normal system. Many machers choose this path simply because this is the only way they can function, by setting up a context in which they can get the job done by their rules. In any other context, they will fail because they have to obey constraints that they judge to be too onerous.

But, live by the sword, die by the sword. When a macher fails, there is never an insufficiency of people waiting to bury him -- his enemies tend to accumulate and have long memories. To survive outside the system, you have to be really good and have real integrity. If you don't, your first mistake will be your last. Sometimes machers can become intoxicated by the power they wield and can really get out of control. In the end, an overly aggressive macher will self-destruct, but not before creating a pretty big mess. Machers rarely fade away quietly; rather, they go out in a blaze of fireworks. Hubris just catches up, and since machers do everything on a grand scale -- they do have vision -- they generally fail spectacularly.

Can you be a macher without having been a schlepper? Yes, but it is rare. Machers who have not served some kind of apprenticeship usually have a piece missing. It is tough to be a macher if you are not grounded in reality, and schlepping is the quintessential training ground in reality.

Machers tend to stay in the macher phase because they are an elite. They enjoy lots of tangible and intangible rewards in the business world in exchange for the results they achieve for their organizations. They are constantly being recruited for bigger and better challenges. It's a great life, and the risks are few -- organizational backlash from time to time, and perhaps a premature coronary from excessive Type-A behavior. But most machers can deal with it.

In other areas of life, being a macher means being competent; actually, it means performing at the highest level of competence. There's a tendency to aspire to be a macher in all parts of one's life. Once one has become a macher in one part, it can be frustrating, as competency can be highly domain-specific. Ergo, many machers become one-dimensional, focusing

their energies in their area of dominance. Since they tend to be competitive by nature, this is a natural stalling-out point for them. Once you are better than most of your peers, what is there left to strive for? As exalted as machers are, there is a higher state. The Yiddish word for it -- mensch -- is pretty much untranslatable into English.

Mensch

A mensch is a gentleman, a "fine person." But that doesn't quite capture the feeling of "He's a real mensch!" The essence of being a mensch is to have a global perspective, to be somewhat introspective and philosophical, and to be kind. A mensch is good at listening and very good at seeing the other person's point of view. We should remark here that the very word "mensch" means "man" in German. But remember, we are using the Yiddish meaning in this article, and so can assert that this phase of life is available to both sexes.

There's a big difference between machers and mensches. First, machers usually have a very hard edge to them; mensches are mellower, softer, and more patient. Machers have a sense of urgency; mensches have a sense of inevitability. The mensch really believes that it all comes out in the wash. The schlepper is often viewed as dull or stupid, when in fact all he may be guilty of is ignorance; the macher is viewed as being smart or clever; the mensch is always viewed as being wise. You go to the macher when you want a problem solved now; you go to the mensch when you are looking for a long-term solution. In some sense, the schlepper can't do anything, the macher is the tactician par excellence, and the mensch is the strategist.

Before I let you think that the mensch is just a Yiddish incarnation of Yoda, I should point out that the mensch is not just a dispenser of advice, but also a doer of deeds. The thing that sets the mensch apart is that he not only knows the right thing to do, but he acts on it, even at great personal cost. Unlike the macher, the mensch is not at all interested in getting the credit for the result. He is vitally interested in the result for its own sake, and doesn't really care if anyone ever knows he was the facilitator. A typical mensch-like thing to do is to make a large, anonymous donation to charity, for example.

Machers sometimes make good mentors, but only as an almost accidental side effect of their primary objective, which is getting results. Machers more often mentor more junior machers, as opposed to schleppers. Mensches, on the other hand, make superb coaches and mentors, because they are so highly attuned to the needs of others; they help everyone because they empathize with everyone. They also have a quintessential long-term perspective, so they understand the leverage of developing others and building infrastructure. They understand the Zen-like beauty of injecting energy into the system, unaware of when or where the positive consequences of that act will appear -- yet confident that it certainly will.

The mensch also provides a lot of lubrication in any organization. He's

above the fray, committed to the organization and its goals, but without a personal agenda, unlike the macher, who always has one. The macher is territorial, whereas the mensch is extraterritorial. The mensch will endeavor to be a peacemaker, a mediator, and someone who is creative in trying to find a solution when there appears to be none. Appearances notwithstanding, the mensch is a highly effective person. His strength comes from his ability to work well with everyone, and the respect everyone has for him.

Can you become a mensch without having been a macher? There are two points of view. The first point of view is that the schlepper-to-mensch transition is sort of like going from apprentice to master craftsman without ever having been a competent journeyman in between. In this point of view, the wisdom the mensch exhibits is accumulated from years of being a macher; the really good machers age well and eventually become mensches.

The problem with this point of view is that there seem to be some clear exceptions. Just as we have noted that many machers never graduate to menschhood, it is also the case that we find a few people displaying the characteristics of mensches who have not been machers. They have schlepped for extended periods of time but have not become bitter. They have accumulated wisdom, are kind, and are secure in themselves. They universally understand people and the human drama, and exhibit lots of empathy. Their judgment is impeccable. The mystery is where their wisdom came from.

More on Mensches

The Swiss physicist and ecologist Olivier Guisan told me twenty-five years ago that the key to growing up was to have one's eyes opened without having one's heart hardened. A maturing process that enables us to cope with the sometimes daunting realities of life, without becoming cynical, is essential. The schlepper is typically a pessimist, the macher a cynic. The mensch is an optimist. He believes in the goodness of people and in civilization's ability to find solutions to complex problems. His own humanity is of course part of this, but he ignores that.

The noted psychologist Mihaly Csikszentmihalyi2 has described a model in his book Flow: The Psychology of Optimal Experience.3 In this theory, there is a tension between knowledge and skill set versus the task worked on. If the task is too easy, boredom sets in, and people are unhappy. If the task is -- relatively speaking -- very challenging compared to competence, then people are stretched, but tense and anxious as a result. When there is a reasonable match -- not too easy, not too hard -- then a "flow state" is achieved. Csikszentmihalyi calls the achievement of the flow state the "flow channel," because it spans a broad range of competency and task difficulty. Flow is a state of grace, where achievement is high and one experiences a feeling of incredible well-being; athletes call it "being in the zone." What is interesting is that in this model, schleppers would appear to be unhappy because they are constantly below the flow channel, working on tasks that they find boring. Machers, it would appear, are

66 troubled because they are most frequently working above the flow channel -- they are characteristically "in over their heads." And, mensches, by my reckoning, are happy and effective because they are so often in the flow channel. If achieving flow is a key, then mensches would seem to have discovered it. Surprisingly, you don t have to be old to be a mensch, although many of the traits associated with mensches can come with age. No, being a mensch is a state of mind, available to all of us with the proper perspective and attitude. Mensches are happy people. They are surrounded by happy people. They can deal with life s worst surprises and help others to do so, too. They have extremely well integrated and balanced lives, and they are at peace. Population Distribution For every hundred schleppers in the world, there are ten machers, and one mensch. Why are there so many schleppers? The easy answer is to steal from Lincoln and say, "God must have loved them, because he made so many." But even so, one would think that frustration would cause almost everyone to graduate sooner or later. Alas, it is not so. First, laziness plays a big part: Many people are just not willing to do what it takes to move up. Second, it requires maturity: There is an attitude adjustment that is required to graduate - you need to take responsibility for your own destiny. It is easier to complain about the system and your inability to advance than it is to take matters into your hands and succeed in spite of obstacles. Finally, there is a commitment to continue to grow. Moving beyond the schlepper zone is a fundamental change, and it scares many people, because it implies a new way of life that is bereft of the simpler comforts that the schlepper enjoys. Because the "no gain" comes with "no pain," many schleppers can never quite get over the emotional barrier it takes to graduate. I think these three factors -- sacrifice, maturity, and fundamental life change -- explain why there are so many schleppers out there. All this exists in the context of a real, sometimes harsh, external world. In my experience, intelligence and talent play much less a part in graduation than do hard work and a determined attitude. In today s global economy, I believe that the opportunity is there, that there are no insurmountable cultural, social, or other barriers. If you allow yourself to believe that external factors rule, you will consign yourself to the role of a schlepper. You can prevail over others who block the path, but no one can lift you over a barrier that you construct for yourself. That there are only ten machers for every hundred schleppers is the greatest waste of human capital that I can imagine. It is a situation that I find untenable as we move deeper and deeper into the information economy. The schlepper jobs are going away, but the attitudes that have allowed them to persist for so long are not. For those who graduate, a relatively short period of their lives is spent

67 schlepping. If you are in this category, most of your life will be spent as a macher, so try to be a good one. If machers could look at this period of their lives as apprentice mensches, we might all be a little better off. I don t think it would make them much less effective, and, in the long run, we d all live longer and be happier. But it s tough to alter the macher s behavior, because he believes his effectiveness is tied to all the characteristics that distinguish him from the mensch. It s a puzzle. I worry that my estimate of one mensch for every ten machers may be higher than the actual ratio. The world needs more mensches, as they seem to be in constantly short supply. In too many cases, their period of menschhood is short, as their spirit is more durable than the body that contains it. Summary The model makes certain assumptions. You start out as a schlepper, grow to be a macher, and hope to become a mensch. That is the usual progression, with the exceptions noted above. Even though the model is simple, it is not perfectly neat; anytime we deal with generalizations about people, we will have "messy" exceptions to deal with. The problem is that while I can tell you what you need to do to become a macher, I can t give you a recipe for becoming a mensch. You can t become a mensch through hard work, the way you can become a macher. It may be that mensches are born, not made. Asking how to become a mensch is a little like asking how to become wise, or how to become enlightened. It helps to have come under the influence of a mensch or two, especially early in life when they can serve as examples. Growing up with a macher for a father and a mensch for a grandfather -- and seeing how their styles played against each other -- could be very enlightening, if the schlepper child were especially aware. Another key idea: understanding that the mensches of the world want nothing in return for their kindness, but that you pass it on to the next generation. But what do I know? Dedicated to Roslyn Rosenthal Marasco, Footnotes 1 A gonif is a common thief. 2 Pronounced "chik-sent-mee-hi." 3 Mihaly Csikszentmihalyi, Flow: The Psychology of Optimal Experience. Harper Collins, 1991 (ISBN ).


The Rational Reader

by Joe Marasco
Senior Vice President
Rational Software

Why introduce a book review section in The Rational Edge, an "e-zine"? The answer for me is pretty simple:

1. Books remain an important source of continuing education.
2. There are more books than there is time, and the signal-to-noise ratio is not great.
3. Intelligent readers can use good book reviews as a front-end filter.

With respect to the first point, I don't believe books are dead by any means. Most of what writers post online is no longer than essay length, because they know the habits of online readers, who simply won't sit still for long periods with a single, book-length text on the computer screen. A book doesn't necessarily target a different person, but it definitely offers a different reading experience, a way to relax and concentrate at the same time. With a book, the author gets a chance to stretch out and flex his or her expository muscles. For the reader, books offer a chance to get beyond superficial renderings, to explore subjects in more detail. Books represent a substantial investment, in money and time, for both the author and the reader. A great book is a wonderfully rewarding experience, and a bad book a greatly disappointing one.

And that brings us to the second point. More books are being published per unit time than we can possibly read. Gresham's Law 1, applied in this context, says that bad books drive good books out of circulation. I set an objective for myself to read twenty-five books a year, which comes out to about two a month. This is about what typical professionals who are very engaged in their jobs can reasonably handle. So I get twenty-five "shots" a year at learning something, and I don't want to waste any of them on second-rate material. You shouldn't either.

That relates to point three: a good book review can help you in many ways.
