
Editor's Notes: Better Software Requires a Better Process

Reader Mail: Readers comment on Kurt Bittner's article in the December issue. Read the letters and Kurt's response.

Reader Question: How do I automate ClearCase commands on the NT side? I already know that you can write a shell script on the Unix side with ClearCase commands in it and execute that. Read Ralph Capasso's response.

News Break: Open Application Group (OAGI) Adopts Rational Rose to Model Pioneering Business Software Integration Specification

Of course, companies are always looking for competitive strengths and improved business practices. In today's economy, this translates to a demand for better software. Note that the increased demand is not merely a symptom of "dotcom" fever; rather, it stems from a growing recognition that, throughout the new economy, software is the means for conducting business, tapping new markets, and connecting suppliers, manufacturers, and end users. While the Internet serves as a vital tool for transaction and communication, the software systems themselves -- often on either end of an Internet connection -- do the heavy lifting and represent the greatest challenges and opportunities for business growth.

The Rational Unified Process, or "RUP," offers an excellent way for companies to improve the quality of their software systems by improving the software development process. In his cover story for our second issue of The Rational Edge, Philippe Kruchten, Rational's premier RUP guru, presents an introduction to RUP and its basic principles. Philippe provides different perspectives on the process, which links all the phases of software engineering into a single, integrated structure. It wasn't easy to find one visual metaphor for this story, so we chose four photos of the Øresund Fixed Link between Denmark and Sweden to illustrate the sort of efficiency and coordination that RUP delivers to development teams. (If that seems like a stretch, then I confess: we simply can't resist cool photos of engineering marvels.)

For those of you already familiar with RUP, take a look at Jason Bloomberg's piece on how RUP can be tailored to the specific demands of e-business (in the "Features" column), or Gary Evans' analysis of RUP for smaller software projects (in the "Technical" column). And there's much more -- for project managers, Rational product users, and senior decision makers alike -- covering the major areas of Rational's product offerings.

Thank you for the mail we have received since our first issue in December; in this issue we're publishing three of your queries with responses from Rational experts. Please send more questions and comments, and I will publish them as answers can be developed.

Sincerely,
Mike Perrow
Editor

Copyright Rational Software 2000 | Privacy/Legal Information

What Is the Rational Unified Process?

by Philippe Kruchten
Rational Fellow, Rational Software Canada

What exactly is the Rational Unified Process, or RUP as many call it now? I can give several answers to this question, from different perspectives:

What is the purpose of the RUP? It is a software engineering process, aimed at guiding software development organizations in their endeavors.

How is the RUP designed and delivered? It is a process product, designed like any software product, and integrated with the Rational suites of software development tools.

What is the structure of the RUP; how is it organized internally? The RUP has a very well-defined and regular structure, using an object-oriented approach for its description.

How would an organization proceed to adopt the RUP? The RUP is a process framework that allows a software development organization to tailor or extend the RUP to match its specific needs.

What will I find in the RUP? It captures many of modern software development's best practices, harvested by Rational over the years, in a form suitable for a wide range of projects and organizations.

The RUP Is a Software Engineering Process

Many organizations have slowly become aware of just how important a well-defined and well-documented software development process is to the success of their software projects. The development of the CMM (Capability Maturity Model) by the Software Engineering Institute (SEI) has become a beacon, a standard to which many organizations look when they aim at attaining level 2, 3, or higher.

Over the years, these organizations have collected their knowledge and shared it with their developers. This collective know-how often grows out of design methods, published textbooks, training programs, and small how-to notes amassed internally over several projects. Unfortunately, in practice, these internally developed processes often end up gathering dust in nice binders on a developer's shelf -- rarely updated, rapidly becoming obsolete, and almost never followed. Other software development organizations have no process at all, and need a starting point, an initial process to jump-start them on the path of faster development of better quality software products. The RUP can help both kinds of organizations by providing them with a mature, rigorous, and flexible software engineering process.

The RUP Is a Process Product

The RUP is not just a book, a development method developed and published once and for all in paper form. "Software processes are software, too," wrote Lee Osterweil, Professor of Computer Science at the University of Massachusetts. In contrast with the dusty-binder approach, the Rational Unified Process is designed, developed, delivered, and maintained like any software tool. The Rational Unified Process shares many characteristics with software products:

Like a software product, the Rational Unified Process is designed and documented using the Unified Modeling Language (UML). An underlying object model, the Unified Software Process Model (USPM), provides a very coherent backbone to the process.

It is delivered online using Web technology, not in books or binders, so it's literally at the developers' fingertips.

Regular software upgrades are released by Rational Software approximately twice a year, so the process is never obsolete and its users benefit from the latest developments. All team members access the same version of the process.

Because it is modular and in electronic form, it can be tailored and configured to suit the specific needs of a development organization -- something that's hard to do with a book or a binder.

It is integrated with the many software development tools in the Rational Suites, so developers can access process guidance within the tool they are using.

Figure 1 shows a page from the RUP.

Figure 1: A Page from the RUP

The Architecture of the RUP

The process itself has been designed using techniques similar to those for software design. In particular, it has an underlying object-oriented model, using UML. Figure 2 shows the overall architecture of the Rational Unified Process. The process has two structures or, if you prefer, two dimensions:

The horizontal dimension represents time and shows the lifecycle aspects of the process as it unfolds.

The vertical dimension represents core process disciplines (or workflows), which logically group software engineering activities by their nature.

The first (horizontal) dimension represents the dynamic aspect of the process, expressed in terms of cycles, phases, iterations, and milestones. In the RUP, a software product is designed and built in a succession of incremental iterations. This allows testing and validation of design ideas, as well as risk mitigation, to occur earlier in the lifecycle. The second (vertical) dimension represents the static aspect of the process, described in terms of process components: activities, disciplines, artifacts, and roles.

Figure 2: Two Dimensions of the RUP

The RUP Is a Process Framework

The Rational Unified Process is also a process framework that can be adapted and extended to suit the needs of an adopting organization. It is general and comprehensive enough to be used "as is," i.e., out of the box, by many small-to-medium software development organizations, especially those that do not have a very strong process culture. But the adopting organization can also modify, adjust, and expand the Rational Unified Process to accommodate the specific needs, characteristics, constraints, and history of its organization, culture, and domain.

A process should not be followed blindly, generating useless work and producing artifacts that are of little added value. Instead, the process must be made as lean as possible while still fulfilling its mission to help developers rapidly produce predictably high-quality software. The best practices of the adopting organization, along with its specific rules and procedures, should complement the process.

The process elements that are likely to be modified, customized, added, or suppressed include artifacts, activities, workers, and workflows, as well as guidelines and artifact templates. The Rational Unified Process itself contains the roles, activities, artifacts, guidelines, and examples necessary for its modification and configuration by the adopting organization. Moreover, these activities are also supported by the Rational Process Workbench (RPW) tool. This new tool uses a UML model of the Rational Unified Process to support process design and authoring activities, and the production of company-specific or project-specific RUP variants, called development cases.

Starting in 2000, the RUP contains several variants, or pre-packaged development cases, for different types of software development organizations.

The RUP Captures Software Development Best Practices

The Rational Unified Process captures many of modern software development's best practices in a form suitable for a wide range of projects and organizations:

1. Develop software iteratively.
2. Manage requirements.
3. Use component-based architectures.
4. Visually model software.
5. Continuously verify software quality.
6. Control changes to software.

1. Develop Software Iteratively

Most software teams still use a waterfall process for development projects, completing in strict sequence the phases of requirements analysis, design, implementation/integration, and test. This inefficient approach idles key team members for extended periods and defers testing until the end of the project lifecycle, when problems tend to be tough and expensive to resolve and pose a serious threat to release deadlines. By contrast, the RUP represents an iterative approach that is superior for a number of reasons:

It lets you take into account changing requirements. The truth is that requirements usually change. Requirements change and "requirements creep" -- the addition of requirements that are unnecessary and/or not customer-driven as a project progresses -- have always been primary sources of project trouble, leading to late delivery, missed schedules, dissatisfied customers, and frustrated developers.

Integration is not one "big bang" at the end; instead, elements are integrated progressively -- almost continuously. With the RUP, what used to be a lengthy time of uncertainty and pain -- taking up to 40% of the total effort at the end of a project -- is broken down into six to nine smaller integrations involving fewer elements.

Risks are usually discovered or addressed during integration. With the iterative approach, you can mitigate risks earlier. As you unroll the early iterations, you test all process components, exercising many aspects of the project, such as tools, off-the-shelf software, people skills, and so on. You can quickly see whether perceived risks prove to be real, and also uncover new, unsuspected risks when they are easier and less costly to address.

Iterative development provides management with a means of making tactical changes to the product -- to compete with existing products, for example. It allows you to release a product early with reduced functionality to counter a move by a competitor, or to adopt another vendor for a given technology.

Iteration facilitates reuse; it is easier to identify common parts as they are partially designed or implemented than to recognize them during planning. Design reviews in early iterations allow architects to spot potential opportunities for reuse, and then develop and mature common code for these opportunities in subsequent iterations.

When you can correct errors over several iterations, the result is a more robust architecture. As the product moves beyond inception into elaboration, flaws are detected even in early iterations rather than during a massive testing phase at the end. Performance bottlenecks are discovered at a time when they can still be addressed, instead of creating panic on the eve of delivery.

Developers can learn along the way, and their various abilities and specialties are more fully employed during the entire lifecycle. Testers start testing early, technical writers begin writing early, and so on. In a non-iterative development, the same people would be waiting around to begin their work, making plan after plan but not making any concrete progress. What can a tester test when the product consists of only three feet of design documentation on a shelf? In addition, training needs, or the need for additional people, are spotted early, during assessment reviews.

The development process itself can be improved and refined along the way. The assessment at the end of an iteration not only looks at the status of the project from a product or schedule perspective, but also analyzes what should be changed in the organization and in the process to make it perform better in the next iteration.

Project managers often resist the iterative approach, seeing it as a kind of endless and uncontrolled hacking. In the Rational Unified Process, the iterative approach is very controlled; the number, duration, and objectives of iterations are carefully planned, and the tasks and responsibilities of participants are well defined. In addition, objective measures of progress are captured. Some reworking takes place from one iteration to the next, but this, too, is carefully controlled.

2. Manage Requirements

Requirements management is a systematic approach to eliciting, organizing, communicating, and managing the changing requirements of a software-intensive system or application. The benefits of effective requirements management are numerous:

Better control of complex projects. This includes greater understanding of the intended system behavior as well as prevention of requirements creep.

Improved software quality and customer satisfaction. The fundamental measure of quality is whether a system does what it is supposed to do. With the Rational Unified Process, this can be more easily assessed because all stakeholders have a common understanding of what must be built and tested.

Reduced project costs and delays. Fixing errors in requirements is very expensive. With effective requirements management, you can decrease these errors early in the development, thereby cutting project costs and preventing delays.

Improved team communication. Requirements management facilitates the involvement of users early in the process, helping to ensure that the application meets their needs. Well-managed requirements build a common understanding of the project needs and commitments among the stakeholders: users, customers, management, designers, and testers.

It is often difficult to look at a traditional object-oriented system model and tell how the system does what it is supposed to do. This difficulty stems from the lack of a consistent, visible thread through the system when it performs certain tasks. In the Rational Unified Process, use cases provide that thread by defining the behavior performed by a system. Use cases are not required in object orientation, nor are they a compulsory vehicle in the Rational Unified Process. Where they are appropriate, however, they provide an important link between system requirements and other development artifacts, such as design and tests. Other object-oriented methods provide use-case-like representations but use different names for them, such as scenarios or threads.

The Rational Unified Process is a use-case-driven approach, which means that the use cases defined for the system can serve as the foundation for the rest of the development process. Use cases used for capturing requirements play a major role in several of the process workflows, especially design, test, user-interface design, and project management. They are also critical to business modeling.

3. Use Component-Based Architecture

Use cases drive the Rational Unified Process throughout the entire lifecycle, but design activities center on architecture -- either system architecture or, for software-intensive systems, software architecture. The main focus of the early iterations is to produce and validate a software architecture. In the initial development cycle, this takes the form of an executable architectural prototype that gradually evolves, through subsequent iterations, into the final system.

The Rational Unified Process provides a methodical, systematic way to design, develop, and validate an architecture. It offers templates for describing an architecture based on the concept of multiple architectural views. It provides for the capture of architectural style, design rules, and constraints.

The design process component contains specific activities aimed at identifying architectural constraints and architecturally significant elements, as well as guidelines on how to make architectural choices. The management process shows how planning the early iterations takes into account the design of an architecture and the resolution of major technical risks.

A component can be defined as a nontrivial piece of software: a module, package, or subsystem that fulfills a clear function, has a clear boundary, and can be integrated into a well-defined architecture. It is the physical realization of an abstraction in your design. Component-based development can proceed in several ways:

In defining a modular architecture, you identify, isolate, design, develop, and test well-formed components. These components can be individually tested and gradually integrated to form the whole system.

Furthermore, some of these components can be developed to be reusable, especially components that provide solutions to a wide range of common problems. Reusable components are typically larger than mere collections of utilities or class libraries. They form the basis of reuse within an organization, increasing overall software productivity and quality.

More recently, the advent of commercially successful infrastructures supporting the concept of software components -- such as the Common Object Request Broker Architecture (CORBA), the Internet, ActiveX, and JavaBeans -- has launched a whole industry of off-the-shelf components for various domains, allowing developers to buy and integrate components rather than develop them in-house.

The first point above exploits the old concepts of modularity and encapsulation, bringing the concepts underlying object-oriented technology a step further. The final two points shift software development from programming software (one line at a time) to composing software (by assembling components).

The Rational Unified Process supports component-based development in several ways. The iterative approach allows developers to progressively identify components and decide which ones to develop, which ones to reuse, and which ones to buy. The focus on software architecture allows you to articulate the structure: the architecture enumerates the components and the ways they integrate, as well as the fundamental mechanisms and patterns by which they interact. Concepts such as packages, subsystems, and layers are used during analysis and design to organize components and specify interfaces. Testing is organized around single components first, and then is gradually expanded to include larger sets of integrated components.
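To make the idea of a clear component boundary concrete, here is a minimal sketch in Java. The names and the payment example are invented for illustration and do not come from the article; the point is only that clients depend on the interface, so the realization behind it can be developed in-house, reused from another project, or bought off the shelf.

```java
// Hypothetical sketch of a component boundary: clients see only the
// interface, so the implementation can be built, reused, or bought
// without touching the rest of the architecture.
public interface PaymentAuthorizer {
    boolean authorize(String account, long amountCents);
}

// One possible realization behind the boundary; swapping it for an
// off-the-shelf component would not affect any client code.
class InHousePaymentAuthorizer implements PaymentAuthorizer {
    @Override
    public boolean authorize(String account, long amountCents) {
        // Placeholder business rule, for illustration only.
        return amountCents > 0 && !account.isEmpty();
    }
}
```

Testing can then be organized around such components individually before they are integrated, as the article describes.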

4. Visually Model Software

Models are simplifications of reality; they help us to understand and shape both a problem and its solution, and to comprehend large, complex systems that we could not otherwise understand as a whole. A large part of the Rational Unified Process is about developing and maintaining models of the system under development.

The Unified Modeling Language (UML) is a graphical language for visualizing, specifying, constructing, and documenting the artifacts of a software-intensive system. It gives you a standard means of writing the system's blueprints, covering conceptual items such as business processes and system functions, as well as concrete items such as classes written in a specific programming language, database schemas, and reusable software components. While it provides the vocabulary to express various models, the UML does not tell you how to develop software. That is why Rational developed the Rational Unified Process: a guide to the effective use of the UML for modeling. It describes the models you need, why you need them, and how to construct them. RUP2000 uses the current version of the UML.

5. Continuously Verify Quality

People often ask why there is no worker in charge of quality in the Rational Unified Process. The answer is that quality is not added to a product by a few people. Instead, quality is the responsibility of every member of the development organization. In software development, our concern about quality is focused on two areas: product quality and process quality.

Product quality: the quality of the principal product being produced (the software or system) and all the elements it comprises (for example, components, subsystems, architecture, and so on).

Process quality: the degree to which an acceptable process (including measurements and criteria for quality) was implemented and adhered to during the manufacturing of the product. Additionally, process quality is concerned with the quality of the artifacts (such as iteration plans, test plans, use-case realizations, the design model, and so on) produced in support of the principal product.

6. Control Changes to Software

Particularly in an iterative development, many work products are often modified. By allowing flexibility in the planning and execution of the development, and by allowing the requirements to evolve, iterative development emphasizes the vital issues of keeping track of changes and ensuring that everything and everyone is in sync. Focused closely on the needs of the development organization, change management is a systematic approach to managing changes in requirements, design, and implementation. It also covers the important activities of keeping track of defects, misunderstandings, and project commitments, as well as associating these activities with specific artifacts and releases. Change management is tied to configuration management and measurements.

Who Is Using the Rational Unified Process?

More than a thousand companies were using the Rational Unified Process at the time this article was written. They use it in various application domains, for both large and small projects. This shows the versatility and wide applicability of the Rational Unified Process. Here are examples of the various industry sectors around the world that use it:

Telecommunications

Transportation, aerospace, defense

Manufacturing

Financial services

Systems integrators

More than 50% of these users are either using the Rational Unified Process for e-business or planning to do so in the near future. This is a sign of change in our industry: as the time-to-market pressure increases, as well as the demand for quality, companies are looking at learning from others' experience, and are ready to adopt proven best practices.

The way these organizations use the Rational Unified Process also varies greatly. Some use it very formally: they have evolved their own company process from the Rational Unified Process, which they follow with great care. Other organizations have a more informal usage, taking the Rational Unified Process as a repository of advice, templates, and guidance that they use as they go along -- a sort of "electronic coach" on software engineering. By working with these customers, observing how they use the RUP, listening to their feedback, and looking at the additions they make to the process to address specific concerns, the RUP development team at Rational continues to refine the process for the benefit of all.

To Learn More

Rational Unified Process 2000, Rational Software, Cupertino, CA (2000).

Philippe Kruchten, The Rational Unified Process -- An Introduction, 2nd ed., Addison-Wesley-Longman, Reading, MA (2000).

Grady Booch et al., UML Users' Guide, Addison-Wesley-Longman, Reading, MA (2000).

Ivar Jacobson et al., The Unified Software Development Process, Addison-Wesley-Longman, Reading, MA (1999).


From Craft to Science: Rules for Software Design -- Part II

by Koni Buhrer
Software Engineering Specialist, Rational Software

Developing large software systems is notoriously difficult and unpredictable. Software projects are often canceled, finish late and over budget, or yield low-quality results -- setting software engineering apart from established engineering disciplines. While puzzling at first glance, the shortcomings of software "engineering" are easily explained by the fact that software development is a craft and not an engineering discipline. To become an engineering discipline, software development must undergo a paradigm shift away from trial and error toward a set of first principles.

In this second installment of a two-part series for The Rational Edge, I will outline a set of design rules -- a "universal design pattern" -- associated with the first principle and axiomatic requirements that I proposed in the first installment. The universal design pattern features four types of design elements that, in combination, can describe an entire software system in a single view. Each type of design element is concerned with one -- and only one -- of the following four aspects of system operation:

data structures and primitive operations

external (hardware) interfaces

system algorithms

data flow and sequence of actions

Why Do We Need Design Rules?

A first principle is an established, fundamental law or widely accepted truth that governs a specific field of science or engineering. For the construction field, for example, the law of gravity is a first principle. In the first part of this article, I proposed that software development, too, has a first principle, namely: "Software runs on and interacts with hardware -- hardware that has only finite speed."

Although first principles are the basis of all scientific reasoning, they provide little practical help to an engineer faced with a design task. They are usually too abstract or too cumbersome to be employed directly. Yet first principles induce axiomatic requirements -- requirements that apply to each and every system that ever has been and ever will be created. For the construction field, for example, "The building shall withstand the force of gravity" is an axiomatic requirement. Obviously, any building of value must satisfy this requirement. Software development, too, has axiomatic requirements, as we saw in the first installment of this article:

1. The software must obtain input data from one or more external (hardware) interfaces.

2. The software must deliver output data to one or more external (hardware) interfaces.

3. The software must maintain internal data to be used and updated on every execution cycle.

4. The software must transform the input data into the output data (possibly using the internal data).

5. The software must perform the data transformation as quickly as possible.

From their axiomatic requirements, all established engineering disciplines have derived a set of design rules. In construction, for example, a design rule is: "A weight-bearing wall must always be positioned on top of a weight-bearing wall." The purpose of such design rules is to help engineers create architectures that obviously satisfy the axiomatic requirements. If an architectural design obeys all the design rules, then the engineer knows, a priori, that it satisfies the axiomatic requirements and is therefore consistent with the discipline's first principles. Design rules allow an engineer to easily demonstrate that an architecture is sound before beginning construction work.

So, let's find the design rules of software development that are associated with the five axiomatic requirements above.

The Universal Design Pattern

Design rules come in various forms and shapes. They may be very explicit, such as a list of imperative statements of the form, "The software developer shall...." Or they may be more subtle, such as a set of predefined design elements featured in a design language. If a design language features classes, for example, then an implicit design rule would be, "Use classes for software design." Most software design methods provide both a set of predefined design elements and narrative rules about how to use those design elements.

Let's start with the design elements. Note that existing design languages like the Unified Modeling Language (UML) offer little help. The UML, for example, is an ad hoc notation system with no underlying first principles or axiomatic requirements. Its design elements were chosen because of their great expressive power and because trial and error had proven their usefulness. These are not the design elements we are looking for. Instead, we need the most restrictive design elements possible -- elements that force software developers to create architectures that obviously satisfy our axiomatic requirements. [1] The following four types of design elements fit this description:

Data Entity. Data entities represent the software system's input data, output data, and internal data.

I/O Server. I/O servers encapsulate the external (hardware) interfaces with which the software interacts. I/O servers can also be pure input or output servers.

Transformation Server. Transformation servers perform the transformation from input data to output data, while possibly updating internal data.

Data Flow Manager. Data flow managers obtain input data from the I/O servers, invoke the transformation servers (which transform input data into output data), and deliver output data to the I/O servers. Data flow managers also own the internal data.

Axiomatic requirements 1 through 4 clearly imply the presence of data entities, I/O servers, and transformation servers -- but why do we need data flow managers? The answer is that data flow managers make the flow of execution explicit. And this, in turn, allows a software developer to show that an architecture satisfies axiomatic requirement 5: "The software must perform the data transformation as quickly as possible." Without data flow managers -- say, with the other design elements sending messages to one another -- the flow of execution would quickly become untraceable, and execution timing thus unpredictable.
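One possible way to make the four element types tangible is to render their boundaries as interfaces. The following is a hypothetical Java sketch, not part of the article (which, as discussed later, favors Ada for implementation); all names are invented:

```java
// Marker for passive data entities (input, output, and internal data).
interface DataEntity {}

// I/O servers expose only Get, Put, and Wait (or combinations thereof).
interface IoServer<I extends DataEntity, O extends DataEntity> {
    I get();                 // retrieve buffered input data
    void put(O output);      // hand over output data for buffering
    void waitForInput();     // block until input data is available
}

// Transformation servers are stateless: the result depends only on the
// arguments, and internal data is passed in and out explicitly.
interface TransformationServer<I extends DataEntity, S extends DataEntity, O extends DataEntity> {
    O transform(I input, S internalData);
}

// Data flow managers are the active elements; each runs its own thread
// and drives the I/O servers and transformation servers.
interface DataFlowManager extends Runnable {}
```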

Figure 1: A Simple Architectural Design

Figure 1 is an example of a very simple architectural design using the design elements introduced above. A more complicated design might have multiple data flow managers, multiple I/O servers, and multiple transformation servers. Each data flow manager might obtain input from several I/O servers at different points and deliver output to different I/O servers. Furthermore, there might be internal databases or abstract services represented by I/O servers.

Each type of design element has a distinct nature and a distinct responsibility within a software system. The properties and responsibilities overlap very little, and the design elements of the universal design pattern complement one another nicely. Below is a list of narrative rules that describe in more detail how the design elements of the universal design pattern should be used and implemented.

Data Entity

Each data entity is a passive object of a stateless class. Data entities may be constructed from more basic data entities through aggregation and inheritance. The primitive operations of a data entity are implemented as methods of its class. The primitive operations of a data entity never invoke operations of a transformation server or an I/O server.
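As an illustration of these rules, a data entity might look like the following Java class: a passive value whose methods are primitive operations only, never calling out to an I/O server or a transformation server. This is a hypothetical sketch; the names are invented.

```java
// A passive data entity: plain data plus primitive operations. It holds
// no global state and never invokes I/O or transformation servers.
public final class TemperatureSample {
    private final double celsius;
    private final long timestampMillis;

    public TemperatureSample(double celsius, long timestampMillis) {
        this.celsius = celsius;
        this.timestampMillis = timestampMillis;
    }

    // Primitive operations: small, self-contained computations.
    public double celsius()        { return celsius; }
    public double fahrenheit()     { return celsius * 9.0 / 5.0 + 32.0; }
    public long timestampMillis()  { return timestampMillis; }
}
```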

I/O Server

Externally, I/O servers are passive system elements. They do not dispatch input data or act on other system elements. I/O servers have a fairly uniform external interface, based solely on the operations Get, Put, and Wait (and any combinations thereof). Internally, most I/O servers have an independent thread of control and maintain global state in order to respond to hardware events and client (data flow manager) requests. I/O servers have the ability to buffer both input and output data. When a data flow manager invokes an I/O server operation, the I/O server typically retrieves input data from a buffer and stores output data into a buffer. However, the algorithms by which an I/O server manages its data buffers must be simple. An I/O server never invokes transformation server operations directly.

Transformation Server

Transformation servers are passive and stateless (and thus re-entrant) system elements. Each transformation server implements a data transformation operation, which is governed by a sequential, deterministic algorithm. The algorithm may be arbitrarily complex, but it must depend only on the arguments of the data transformation operation. Together, the transformation servers implement all the algorithmic aspects of a software system. When a data flow manager invokes a transformation operation, all input data is explicitly passed in, all output data is explicitly passed out, and any internal data is explicitly passed in and out. A transformation server may update internal data only through the arguments of its transformation operation, and it may not maintain any global state. A transformation server never obtains input data directly from, or delivers output data directly to, an I/O server.

Data Flow Manager

The data flow managers are the active elements of a system; each one represents an independent thread of control. Data flow managers are the means by which a software developer should implement concurrency within a system (except for hardware drivers). Data flow managers act on I/O servers and transformation servers by invoking their operations. A data flow manager performs all its actions as quickly as possible. Note that for a data flow manager, "waiting for input" constitutes an action. [2]

Data flow managers ensure that data flow and control flow always go hand-in-hand. Data is passed along and eventually passed back whenever a data flow manager invokes the operation of an I/O server or a transformation server. A data flow manager always obtains input data and delivers output data; no data is ever sent to or taken from a data flow manager. As a result, data flow managers de-couple the processing elements (I/O servers and transformation servers) of a system and make data flow and control flow predictable and traceable.
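Assembled into code, the rules above yield a control loop like the following self-contained Java sketch. All names are invented, and this is one reading of the pattern, not the author's own code. The data flow manager is the only active element: it waits for input, obtains it from an I/O server, invokes a stateless transformation that receives and returns the internal data explicitly, and delivers the output to another I/O server.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DataFlowSketch {

    // Data entities: passive values.
    record Sample(double value) {}                 // input data
    record Average(double mean, long count) {}     // internal data

    // Input server: passive toward its client; buffers data internally.
    static class SensorServer {
        private final BlockingQueue<Sample> buffer = new LinkedBlockingQueue<>();
        void deviceInterrupt(Sample s) { buffer.add(s); }   // hardware side
        Sample getWait() throws InterruptedException {
            return buffer.take();                           // Get + Wait combined
        }
    }

    // Output server: accepts output data via Put.
    static class DisplayServer {
        void put(Average a) { System.out.printf("mean=%.2f%n", a.mean()); }
    }

    // Transformation server: stateless and deterministic; internal data is
    // passed in and out explicitly through the operation's arguments.
    static class AveragingServer {
        Average transform(Sample in, Average internal) {
            long n = internal.count() + 1;
            double mean = internal.mean() + (in.value() - internal.mean()) / n;
            return new Average(mean, n);
        }
    }

    // Data flow manager: the single active element; owns the internal data.
    static class AveragingManager implements Runnable {
        private final SensorServer sensor;
        private final DisplayServer display;
        private final AveragingServer averager = new AveragingServer();
        private Average internal = new Average(0.0, 0);

        AveragingManager(SensorServer sensor, DisplayServer display) {
            this.sensor = sensor;
            this.display = display;
        }

        @Override public void run() {
            try {
                while (true) {
                    Sample raw = sensor.getWait();                // waiting is an action
                    internal = averager.transform(raw, internal); // update internal data
                    display.put(internal);                        // deliver output
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();               // shut down cleanly
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SensorServer sensor = new SensorServer();
        new Thread(new AveragingManager(sensor, new DisplayServer())).start();
        for (double v : new double[] {20.0, 21.0, 19.5}) {
            sensor.deviceInterrupt(new Sample(v));                // simulate hardware
        }
        Thread.sleep(100);  // let the manager drain the buffer, then exit
        System.exit(0);
    }
}
```

Note how no element ever pushes data at the manager: the manager pulls input and pushes output, so data flow and control flow stay together.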

Divide and Conquer

The universal design pattern has one key characteristic that sets it apart from any other design method or modeling language: it strictly divides operational concerns among its four types of design elements. The universal design pattern thus eliminates any need for different views of a design; with the universal design pattern, all aspects of a software system can be represented in a single view. But in that view, each type of design element is concerned with one -- and only one -- aspect of system operation:

Data entities are concerned solely with data structures and primitive operations on those data structures. A data entity is an entirely passive object -- pure data [3] -- oblivious to system algorithms, external interfaces, or sequence of actions.

I/O servers are concerned solely with encapsulating and servicing external (hardware) interfaces. An I/O server may be concerned with I/O timing, but not with scheduling or triggering other system actions. I/O servers do not perform any algorithmic tasks with respect to the input or output data they handle.

Transformation servers are concerned solely with system algorithms. A transformation server never has to worry about where data is coming from or going to. All operations of a transformation server are sequential and deterministic. Transformation servers are not concerned with data representation, external interfaces, or sequence of actions.

Data flow managers are concerned solely with data flow and sequence of actions. Data flow managers know what actions the system needs to perform and in what sequence, but they are not concerned with the details of those actions -- the algorithms and external (hardware) interfaces. While data flow managers own the internal data, they are not concerned with its representation.

Other design approaches typically yield design elements that are concerned with multiple aspects of system operation and thus have a high level of internal complexity. For example, a design based on identifying objects in the problem space often yields some design elements that are concerned with all four aspects of system operation: data structures, external interfaces, algorithms, and system actions.

Tying It All Together

You may wonder how the universal design pattern relates to established design languages like the UML, the object-oriented paradigm, or modern programming languages. Am I proposing that we should turn our backs on object orientation and revive data-flow modeling? Or that we should abandon the UML and component-based approaches? No, not at all. We just need to use these approaches more judiciously.

Remember, each type of design element of the universal design pattern is concerned with only one specific aspect of system operation. Therefore, it is perfectly reasonable to use different design languages or modeling tools to describe each type of design element. A particular design language or modeling tool may be well suited to express some aspects of system operation, but not all of them. Similarly, a different programming language may be most appropriate for implementing each type of design element.

Data entities can be modeled very well with object-oriented design languages such as the UML. All these design languages are perfectly capable of representing classes, primitive operations, and relationships between classes. Data entities are best implemented in an object-oriented programming language.

I/O servers are best modeled with skeletal, high-level language code. Except for UML/RT, no design language or visual modeling tool even comes close to adequately describing the low-level, hardware-related issues with which an I/O server must deal. Ada would be the programming language of choice for both design and implementation.

To describe the architecture of a transformation server, flow charts may be perfectly adequate. After all, a transformation server operation is nothing more than an algorithm. Any means for describing an algorithm is therefore suitable for describing the architecture of a transformation server. Transformation servers are best implemented in a structured programming language.

The data flow and object interaction aspects of data flow managers can be modeled with almost any design language. The sequence of actions performed by a data flow manager is best modeled with skeletal, high-level language code. Any structured programming language is suitable for implementing a data flow manager.

It would be nice to have a design language or modeling tool that could visually describe the architecture of an entire system -- including all types of design elements -- in a uniform way. Although such a design language or modeling tool does not currently exist, there is an implementation language that maps surprisingly well to the elements of the universal design pattern: Ada95. [4] The popular object-oriented language C++, on the other hand, performs rather poorly when it comes to implementing data flow managers and I/O servers.

The universal design pattern is clearly inspired by real-time system issues. However, sequential batch applications also appear in the universal design pattern as an interesting special case. A sequential batch application can be modeled with one transformation server and any number of data entities, but without I/O servers or data flow managers. When we implement a simple program, we are really implementing a data transformation. The program takes some input data (input files), produces some output data (output files), and computes the output data from the input data according to a deterministic algorithm. And that's exactly what a transformation server does. Of course, I/O servers, and possibly data flow managers, are also present when we execute the simple program. They are not explicitly represented in the design, though, but hidden within the operating system.

And that should explain why design approaches that are so successful for sequential applications -- object orientation and flow charts, for example -- perform so poorly for real-time systems. In truth, they can work well for real-time systems if they are applied only to the right types of design elements, namely transformation servers and data entities. Object orientation and flow charts are inadequate, however, for designing data flow managers and I/O servers.
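To illustrate the batch special case described above: under this reading, a classic batch utility reduces to data entities plus a single transformation operation, with the I/O servers hidden behind the operating system's file abstraction. A minimal, hypothetical Java sketch (names invented):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class UppercaseBatch {

    // The "transformation server": deterministic, depends only on its input.
    static List<String> transform(List<String> input) {
        return input.stream().map(String::toUpperCase).toList();
    }

    public static void main(String[] args) throws IOException {
        List<String> input = Files.readAllLines(Path.of(args[0])); // input data
        Files.write(Path.of(args[1]), transform(input));           // output data
    }
}
```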

Conclusion

Identifying and understanding the first principle of software development allowed us to define a set of universal design rules -- the universal design pattern -- upon which all software architecture should be based. Using the universal design pattern, a software developer should be able to demonstrate that a software architecture is sound without performing any tests. This makes software design and implementation a predictable, repeatable engineering activity. Specifically, we have learned that:

The hardware environment of a software system is not a constraint, but rather a primary driving force of software architecture and design. It is impossible to create quality architectures without taking the hardware environment into account at the earliest design stages. Hardware interactions are not a deployment issue and cannot be deferred or ignored during architectural design.

At a high level, every (every!) software system can be modeled with four types of design elements that represent different aspects of system operation: data structures and primitive operations, external (hardware) interfaces, system algorithms, and data flow and sequence of actions.

None of the design languages and modeling tools currently in use is adequate for developing and representing an entire software system. There is, however, a programming language powerful and versatile enough to implement all elements and aspects of a software system: Ada.

The universal design pattern is truly universal. It applies to software of all domains, and it can be used no matter what design method or modeling language the software developer otherwise employs. Studying the universal design pattern conveys deep insights into the very nature of software and software design.

The universal design pattern has so many interesting properties that it is impossible to address them all in one article. In future issues of The Rational Edge, I will tell you more about these properties.

[1] Restricting the freedom of a software developer would be a frightening idea if software development were to remain an art or craft -- nobody wants to constrain an artist. But if software development is to become an engineering discipline, then restricting the freedom of the developer is a necessity.

[2] This may seem odd; let me elaborate. In many software systems that process real-time data, an input server sends a signal to a data processing element when input data arrives and thus invokes the data processing element. The universal design pattern does not allow servers to be pro-active in this way. A data flow manager always takes the lead by requesting input data from the server. If the data flow manager wants to wait for the data to arrive, then waiting for data becomes its action. At a very low level, of course, an I/O server thread may send a signal to a data flow manager thread to implement the conclusion of the "waiting for input" action, but we should not concern ourselves with such implementation details in the architectural design.

[3] The famous exception to this rule is "files." A file object has a hidden, embedded hardware interface. It is acceptable to use such data entities in sequential applications. But in real-time systems, data entities with hidden, embedded hardware interfaces must be strictly avoided.

[4] In fact, Ada maps so well that I can't believe it is pure coincidence. I'd like to ask the Ada designers if they had the universal design pattern in mind when they created the Ada language.

Using the RUP for Enterprise e-business Transformation

by Jason Bloomberg
Director of Web Solutions, WaveBend Solutions

What makes today's e-business models so different from the technology-supported business models of the last 30 years? There are a handful of fundamental differences, but this article will focus on the changing role of the customer in an e-business. Today's technology enables the enterprise to offer personalized, "one-to-one" service to individual customers in a cost-effective manner. Building these relationships with customers is easier said than done, however, because the company must present a unified face to the customer, which means raising responsibility for the customer to the enterprise level.

The level of technology change necessary for an enterprise to make such a fundamental transformation is daunting. Sales, marketing, and customer service operations must work together and integrate seamlessly with back-office operations. Companies must integrate their e-commerce, Customer Relationship Management (CRM), and business intelligence projects; and these systems must also be integrated with the existing legacy technology. Such a transformation typically takes years and involves multiple technology projects. Many factors must fall into place for a company to make the change successfully: a clear e-business vision and strategy; a robust enterprise architecture; and an iterative, component-based methodology. That's where the Rational Unified Process (RUP) comes in.

The Rational Unified Process and Large-Scale Systems

The RUP is a software engineering process: a mature, flexible framework for building software systems. Although the RUP contains a business modeling workflow, the process wasn't originally intended for traditional business process reengineering efforts. In an e-business transformation project, however, the technology becomes pervasive throughout the organization.

As a result, the RUP now has a greater applicability beyond the software projects for which it was originally intended. Fortunately, Rational Software designed the RUP to be flexible. It is not designed to be used as-is right out of the box; customization is expected and supported. With some critical augmentations, the RUP can be used effectively to provide a framework for enterprise-wide e-business transformations.

The approach to applying the RUP at the enterprise level begins with a paper by Rational Software, based on work originally done by Ivar Jacobson, Karin Palmquist, and Maria Ericsson in 1995. This paper, entitled "Developing Large-Scale Systems with the Rational Unified Process," [1] discusses how to apply the RUP to large-scale systems by treating them as systems of interconnected subsystems. The supersystem, or large-scale system (referred to in the article as a "superordinate system"), should be managed within the RUP framework, as should the component subsystems. Each subsystem is a system in its own right, but each subsystem takes on the role of a "black box" actor in the models of the other subsystems. Much of the design of the supersystem amounts to specifying the interfaces among the various subsystems.

Even though each subsystem is itself a system that should be managed as a separate project, the subsystems are integrated into the supersystem following the implementation workflow. Each subsystem is treated as a component that is then integrated with the other subsystem components into the "implementation subsystem." This view of systems as subsystems in a larger system can then be extended recursively, with the larger system taking on the role of a subsystem in an even larger, more complex supersystem. Rational points out, however, that this decomposition of a very large system is complex and rarely necessary in practice.

Enterprise e-business as a Large-Scale System

In the 1995 article, Rational discusses the construction of large-scale software systems; in our case, we are discussing the construction of an enterprise e-business. Specific characteristics of e-business -- for example, the pervasive use of technology that puts the "e" in "e-business" -- make the best practices upon which the RUP is built applicable to the more general case of e-business transformation. In particular, the use of component-based architectures, requirements management, and iterative development cycles applies very well in the e-business transformation scenario.

When a collection of Web sites is itself a subsystem in the enterprise e-business, the recursive decomposition of large-scale systems into subsystems that are themselves supersystems is particularly useful. The topmost supersystem consists of the enterprise-wide e-business implementation: in effect, a true e-business is itself the topmost supersystem. Its subsystems consist of major e-business systems, including CRM solutions, e-commerce packages, and the like. Each of these systems, in turn, can be decomposed into subsystems -- individual Web sites, individual package implementations, etc. -- that are best implemented as individual projects.

One of the main differences between enterprise e-business and traditional large-scale systems is the broad applicability of various e-business initiatives in the enterprise. Every department needs a Web site; every channel must access customer information; demand across the enterprise drives the supply chain. The risks of "balkanization" of the e-business implementation strategy are therefore especially high: overlooking possibilities for component reuse and creating incompatibilities among systems are common mistakes companies make when attempting to become e-businesses.

The integration of these individual subsystems into a large-scale supersystem can mitigate these risks, especially when the subsystems are Web sites or other systems that will run on the same platform -- and as long as the supersystem has been designed properly to provide reuse of components among the subsystems. Designing for component reuse in an e-business supersystem, in turn, leads to data integration strategies that lower the risk of incompatibilities among subsystems -- for example, CORBA-based object architectures or XML-based data schemas.

A top-down architecture is the only way to achieve the economies of scale that come from component reuse. At the high level, remember, the components are themselves systems. A high-level architecture might identify the fact that many parts of the employee intranet and the customer extranet are the same, and can thus be built with common elements. Often, the economies to be derived from a high-level architecture are simply the avoidance of redundant systems -- for example, customer databases. Not only are multiple customer databases uneconomical, but they can also impede the e-business transformation strategy. To present a single face to the customer, it is essential to integrate the enterprise's customer information. If a customer e-mails the company and then calls the 800 number to see if they got the e-mail, then the customer service representative should have all the data necessary to respond effectively to the customer's needs.

In addition, a recursive decomposition of an enterprise e-business can provide a means for building the high levels of scalability today's e-businesses require. It is impossible to correctly size an enterprise's customer database by working at the departmental level, for example. Only by considering the broader capacity and throughput requirements of the supersystem can the enterprise architect specify the appropriate characteristics of each of the component subsystems. The architects on each subsystem project can then take the specifications provided by the enterprise architect as a starting point for their own design artifacts.
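One way to picture the enterprise-level contract the author describes is a single customer-data interface that every channel depends on, instead of per-department databases. The following Java fragment is a hypothetical illustration only; these names and operations are not from the article.

```java
// Hypothetical supersystem-level contract for customer data. Each channel
// (Web site, e-mail workflow, call center, CRM package) depends on this one
// interface rather than maintaining its own customer database, so a service
// representative can see the e-mail a customer just sent before answering
// the 800-number call.
public interface CustomerDirectory {
    CustomerRecord findByEmail(String email);
    void recordContact(String customerId, String channel, String summary);
}

record CustomerRecord(String id, String name, String email) {}
```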

Requirements Gathering for e-business Transformation

Requirements gathering for e-business transformation has its own particular pitfalls, but it can be effectively managed using the RUP. Use cases at the supersystem level are necessarily more general than those at the subsystem levels. The supersystem use cases are usually split into use cases for the subsystems; each subsystem appears as an actor in the use cases for other subsystems. In this way, managing requirements hierarchically guarantees that the high-level e-business strategy can be traced down to individual subsystems. For example, a supersystem use case might be "customer requests information from the company." This use case could then be split into "customer requests information by e-mail," "customer calls 800 number for information," etc. The e-mail workflow project and the call center project may be handled by separate teams at different times. By managing requirements and then building the design models at the enterprise level, though, the company can integrate its various customer communication channels into a cohesive system.

Iterative Development Beyond the Software

Iterative development is the RUP's key best practice for mitigating the risk that requirements might change during the development of a software system. Taking an iterative approach to e-business transformation -- not just the software development portions, but the organizational aspects as well -- can be essential for mitigating a broader set of risks. Requirements are likely to change even more frequently than in a traditional software development project, because the overall project is driven directly by the corporate strategy, which, in turn, depends on the marketplace and other external forces.

Requirements for traditional software development are typically only indirectly driven by external forces: the stakeholders are often at the departmental level, and have a narrower focus for their requirements than in the enterprise e-business case. The primary stakeholders for the e-business supersystem include customers and top executives, while stakeholders for subsystems include departmental employees (the call center manager, etc.). As a result, the potential for external change is greater in the e-business environment, and change often occurs very rapidly compared to the length of a project lifecycle (one aspect of the notorious "Internet time" phenomenon). The longer project lifecycles of the e-business supersystem -- combined with more dynamic external forces -- increase the importance of an iterative approach.

Furthermore, moving from a corporate "build one-to-one relationships with our customers" strategy to specifying functional requirements for multiple subsystems is a complex, multistep process in its own right. As a result, such projects carry high risk. Using the requirements management strategies and artifacts in the RUP, however, can provide the essential requirements traceability to the project, and keep the whole initiative from going off into the weeds.

Another reason that an iterative approach to e-business transformation is critical to managing risk is that the component technologies are still relatively immature. Today's broad assortment of Web-enabled solutions is a far cry from the more predictable client/server and mainframe/terminal architectures of the past. It is likely (and recommended) that most of the subsystems in an e-business be commercial, off-the-shelf (COTS) packages. Even so, many of these packages, including CRM software, personalization engines, and e-commerce systems, are still lacking in features and robustness, and often do not integrate very well with one another. This segment of the software industry is also experiencing dramatic consolidation. Whatever packages the company selects this year may be very different a few years down the road. Therefore, the long-term strategy may best be divided into nearer-term iterations. The supersystem project may be a multi-year project with long iterations but include some shorter-term subsystem projects that fit into a single iteration of the supersystem project. Managing the supersystem within the RUP framework can be an effective way of tracing requirements and managing change, allowing the enterprise to execute on its long-term strategy while maintaining the flexibility needed to produce results quickly.

Conclusion

E-business transformation in the enterprise is extraordinarily difficult. No system is unaffected, and no business process is undisturbed. Technology becomes integrated with the day-to-day operations of the business, both internally and externally. Not only do customers use the enterprise software; they actually rely upon the software to do business with the company. And without customers, there is no business, pure and simple. As Rational Software says, "the software is the business."

It is critical to the success of companies undergoing e-business transformation that they follow a structured framework for designing an enterprise architecture, managing requirements, and dealing with change. The Rational Unified Process has the maturity and flexibility to provide this framework.

[1] This article is available from Rational Software.


Requirements Frameworks Cut Development Costs and Time to Market

by Jim Heumann
Requirements Management Evangelist
Rational Software

In a business context, every software application represents an attempt to solve a business problem. And although every business is unique, in the New Economy many companies are facing identical problems and challenges: How can we sell goods to consumers via the Web? How can we make more timely online information available to our distributors, field sales staff, and customers while maintaining corporate security?

In addition, numerous companies and consortiums are looking for ways to bring buyers and sellers together via the Web. Business-to-business Net Markets are increasingly popular forums for exchanging goods and services that can save both sides enormous amounts of time and money. For buyers, they are a good way to locate new suppliers, lower purchasing costs, streamline delivery, and ultimately reduce time to market for new products. For suppliers, a Net Market can help attract new customers, lower inventory costs, and reduce sales expenses.

When multiple businesses build software to solve common business problems such as these, they must also generate nearly identical requirements for their applications. Wouldn't it be great if they could somehow get together and save each other from reinventing the wheel each time someone begins a development project?

Giving Software Teams a Head Start

That's what a requirements framework is all about. A requirements framework provides a set of pre-configured analysis artifacts, including Rational Rose model files with associated Rational RequisitePro use case documents and requirements, and a Rational RequisitePro requirements database that includes additional requirements and specifications developed for specific project needs. You can use the framework artifacts verbatim or refine them to suit your application. The goal of a requirements framework is to give software teams a head start when they begin a new application to solve a business problem that others have already tackled successfully.

Take the Net Markets we mentioned above, for example. Many industries are already transacting huge chunks of business through such markets, including steel (e-Steel) and energy (Altra). In a recent Newsweek interview, General Electric Chairman Jack Welch announced that his company plans to purchase $6 billion in materials through online auction sites this year. Although the commodities for these specialized markets differ, the architecture of their software is largely the same. To help organizations develop more of these useful sites, Rational now offers a Net Market Edition Requirements Framework, available free of charge from Rational's website.

Suppose your company wants to create a Net Market for the plumbing industry. Without a requirements framework, your development team would have to start from scratch by doing a complete analysis and defining all requirements for the software they want to write. By using Rational's Net Market Edition Requirements Framework, however, they could cut many steps from their analysis and requirements processes, shorten the software development lifecycle, get the plumbing Net Market up and running quickly, and enable your company to start generating revenue sooner. To understand more about these benefits, let's look at the approach to software development that underlies the framework.

Requirements Frameworks and the Rational Unified Process

The Net Market Edition Requirements Framework is based on the Rational Unified Process (RUP), a comprehensive software development framework that comprises six core workflows: Business Modeling, Requirements, Analysis & Design, Implementation, Test, and Deployment. Each of these workflows, in turn, contains its own iterative activities, which progress incrementally throughout the project lifecycle. A good requirements framework relates to the first two workflows: Business Modeling and Requirements. It contains predefined artifacts (documents, requirements, and UML-based visual models) that would normally be completed for these workflows.

Business Modeling. Software teams sometimes dismiss this as a non-technical activity and skip the discipline of constructing a business model. But business modeling is an essential step for any development project. It helps a project team meet several goals:

- Understand the structure and dynamics of the organization in which their system will be deployed (the target organization);
- Understand current problems in the target organization and identify potential improvements;
- Ensure that customers, end users, and developers have a common understanding of the target organization;
- Derive the system requirements needed to support the target organization.

In Rational's Net Market Edition Requirements Framework, for example, the business use case model contains specifications for processes such as "execute trade" and "order fulfillment." The business object model contains business workers such as "Market Analyst" and "Net Market Operation," plus business objects such as "Auction Rules," "Invoice," and "Purchase Order." These models give a team charged with the development of a new Net Market a good understanding of where the software fits into the overall business. They are also the basis for validating that the software will solve the right problem.

Figure 1: Net Market Operations Business Use Case Diagram

Requirements. The other workflow for which the requirements framework supplies artifacts is Requirements. One purpose of the Requirements workflow is to establish and maintain agreement with the customers and other stakeholders on what the system should do, and to provide system developers with a better understanding of the system requirements. Although producing a well-specified set of requirements before starting a project is not a simple task, organizations have found that such requirements are highly leverageable. They can significantly reduce time to market as well as increase the quality of the end result.

The specific artifact RUP provides for Requirements is the UML use case model, along with supporting documents and a requirements database. Use cases specify large "chunks" of functionality that provide results of value to an "actor" (a person or system with which the new system will communicate). Net Market use cases include integral activities such as "Do Trade," "Approve Purchase," and "Find Product."

Figure 2: Net Market Use Case Diagram

As shown in Figure 2, these use cases directly jump-start other parts of the development process. Specifically:

- They provide direct inputs for analysis and design.
- They enable architects to identify high-risk areas to implement first. Architects also analyze use cases to find the key entities (objects) the software solution will contain.
- Designers analyze each use case diagram to find collaborating objects that will work together in implementing the use case.
- For testers, use case diagrams provide text that describes a series of event flows, or step-by-step specifications, explaining how a user interacts with the system. These event flows translate fairly directly into test cases, which testers can begin early in the development lifecycle.

To illustrate this last point, consider the following use case specification, which comes directly from Rational's Net Market Edition Requirements Framework.

Use Case Specification: Find Product

Brief Description

[The description should briefly convey the role and purpose of the use case. A single paragraph should suffice.]

This use case allows Buyers to search for particular products. Many Net Market providers will create a catalog describing all products available on the market. This use case gives Buyers access to that catalog.

Flow of Events

Basic Flow

[This use case starts when the actor does something. An actor always initiates use cases. The use case should describe what the actor does and what the system does in response. It should be phrased in the form of a dialog between the actor and the system.]

The use case starts when the Domain Expert actor decides to look at products on the Net Market. The Domain Expert begins the use case by accessing the catalog.

1. The System actor presents the Buyer with a choice of ways to access the catalog. In general, the system should support:
   a. Searching the catalog: allow buyers to enter search criteria and return matching products.
   b. Browsing the catalog: allow buyers to navigate the entire catalog, viewing categories and subcategories and the products in those categories and subcategories.
2. The Net Market may provide a search-constrained browse, where the Domain Expert can enter search terms and then browse the catalog, seeing only products that meet the search criteria. (Note: these access methods are described in more detail below and in the supplementary requirements.)
3. If the Domain Expert chooses to enter search criteria, the system presents a list of available search properties. These include product names, model numbers, manufacturers, and any other properties necessary to identify a particular product.
4. The Domain Expert fills in values or patterns for these properties.
5. The system then returns a list of products meeting the search criteria.
6. Some Net Markets may provide a tool for comparing the products in the list of search results. Such a comparison often takes the form of a tabular display listing each product and its relevant properties.
7. If the Net Market provides such a tool, the system will give the Domain Expert the option of using it.
8. When the Domain Expert selects the tool, the system will generate the comparison display.
9. If the Domain Expert chooses to browse the catalog, the system presents a hierarchical display of product categories and subcategories. This display reflects the catalog taxonomy designed during Net Market construction and described in the catalog management supplementary requirements.
10. As the Domain Expert navigates the hierarchy, the system displays the products, or the number of products, in the catalog at or below the current level in the hierarchy.
11. The Domain Expert selects one or more products, or product categories.
12. If the Domain Expert does not see a product that fits her needs, she may solicit help from other Net Market Customer actors. Include the Chat/Collaboration use case here.
13. If the Domain Expert does not find a product to fit her needs, she may request additional help from a Net Market Operator actor. Include the Provide Customer Service use case here.
14. During the Domain Expert's interaction with the Catalog Manager, the System actor invokes the Logging use case to record catalog interactions. These interactions may subsequently be used to improve the Catalog Manager's ability to satisfy the Domain Expert's needs. Include the Logging use case here.
15. The use case ends.
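To make the testers' head start concrete, here is one way steps 3 through 5 of the basic flow above might translate into an early, automated test. Everything here is a hand-rolled illustration, not framework content: the ProductCatalog interface stands in for whatever component will eventually realize the use case, and the in-memory fake plays the role a stub would play until the real subsystem exists.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the component that will realize "Find Product".
interface ProductCatalog {
    List<String> search(String property, String value);   // steps 3-5: search by property
}

// Trivial in-memory fake so the test can run before the real subsystem exists.
class InMemoryCatalog implements ProductCatalog {
    public List<String> search(String property, String value) {
        List<String> hits = new ArrayList<String>();
        if ("manufacturer".equals(property) && "Acme".equals(value)) {
            hits.add("Acme Pipe Wrench");
        }
        return hits;
    }
}

public class FindProductTest {
    public static void main(String[] args) {
        ProductCatalog catalog = new InMemoryCatalog();

        // Steps 3-4 of the basic flow: enter search criteria.
        List<String> results = catalog.search("manufacturer", "Acme");

        // Step 5: the system returns a list of products meeting the criteria.
        assertTrue("search returns at least one match", !results.isEmpty());
        assertTrue("expected product present", results.contains("Acme Pipe Wrench"));
        System.out.println("Find Product basic-flow test passed.");
    }

    private static void assertTrue(String message, boolean condition) {
        if (!condition) throw new AssertionError("FAILED: " + message);
    }
}
```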

So a requirements framework saves time not only for front-end business modeling and requirements, but throughout the development lifecycle, by giving architects, designers, and testers a head start, too. It facilitates RUP's iterative development model by allowing most lifecycle activities to proceed simultaneously rather than in rigid sequence.

The Rational Net Market Edition Requirements Framework download contains a Rational Rose model that holds the business model and the use case model, and a Rational RequisitePro project that contains requirements and documents detailing the business and system use cases. If your organization is building a Net Market, you can realize tremendous savings in both development costs and time to market by taking advantage of the knowledge and expertise built into this framework. We urge you to try it and let us know how it works for you!

How Technical Writers Use Rational ClearCase

by Liz Augustine
Technical Writer
Rational Software

Every few weeks, one of my technical writing colleagues sends me a message like this one: "Help! Our group is starting to use Rational ClearCase! How do you recommend that the writers here set up a workable ClearCase environment?" This article attempts to answer that question.

What Are Some Key Benefits of Using Rational ClearCase?

Technical writers, like software developers, use Rational ClearCase to track changes to their work and to ensure that they can reliably repeat the development processes that their group follows. For example, many writers who don't work with Rational ClearCase report confusion when they hand their work off to an editor. It can sometimes be hard to determine which is the latest set of files. It can also be hard to prevent two people from making changes to the same file at the same time. When you work with Rational ClearCase, you don't have to worry about these issues. ClearCase tracks each version for you so that you always know which one is the latest. And if someone else is working on a file, ClearCase tells you, letting you decide -- before you make changes -- whether you want to proceed.

About Rational ClearCase

Rational ClearCase is a configuration management system designed to help software development teams track the files and directories they use to create software. ClearCase helps you manage the development and build process and enforce site-specific development policies. Rational ClearCase stores its files and directories in a database called a VOB (Versioned Object Base). For each file or directory, ClearCase keeps track of each version you create. How does Rational ClearCase know which version of each file you want to work with? You use a ClearCase view to select one version of each file or directory to use in your workspace. When you create a view, ClearCase creates a default configuration that shows you the latest version. Most writers I know use this default view and never have to change it. Rational ClearCase stores files so that you always have access to them. You can do anything you want to a file (open it, print it) until you want to make permanent changes. At that point, you check out the file, edit it and save your changes, then check in the file. This set of changes becomes the next version, and you start the cycle again.

The advantages of using Rational ClearCase became crystal clear to me a few months ago, when a manager told me that it was finally time for translators to start localizing our documentation for Rational customers around the globe. The only problem was that her team wanted to start with the files we had finished three months earlier -- and we had already started work on our next release. Using a traditional system, this would have been a disaster. The three-month-old files would already be overwritten or hard to find. Even with undisciplined use of a configuration management program, the situation wouldn't have been much better -- it would have taken days, if not weeks, to sort out. Because we were using some simple features of Rational ClearCase (labeling, which I'll describe later), I was able to give the manager access to the correct files within minutes.

How Do Writers Typically Use Rational ClearCase?

Writers typically use Rational ClearCase to store source material -- the files that we edit in order to produce books, Help files, and other material. For example, in your daily work, you might work with Adobe FrameMaker, Microsoft Word, Microsoft FrontPage, ForeFront ForeHelp, or ehelp RoboHelp. Because writers usually work with tools that produce binary files (think Word's .doc files or Frame's .fm files), we don't typically work in parallel. That is, writers tend to organize their work so that just one person works on a file at a time. In this respect we diverge from software developers, who primarily work with text-based files when they write code. In fact, it's quite common for developers to work in parallel -- they use Rational ClearCase's branching and merging capabilities so that two people can work on the same file at the same time. As a result, developers tend to have a more complicated approach to using Rational ClearCase, and they may encourage writers to adopt the same approach.

In most situations, though, you can usually use ClearCase in its simplest form, working on the "main" line of development and never using branches. At Rational, my group keeps documentation files in a VOB that we've nicknamed the doc VOB. We tend to have our own, private VOB because we work differently from the developers. Although these details may sound complicated, you can pick up the basic concepts very quickly. I make some recommendations about getting started at the end of this article.

How Do You Organize the Directory Structure?

There are many ways to organize directories in a doc VOB, and the ultimate correct approach is the one that works for your group. We have found that the following structure works well for us:

- \common -- Contains files that are shared by more than one book. For example, this directory might be a good home for a preface boilerplate, your copyright page, and so on.
- \L10N.I18N -- Contains files related to localization (L, 10 letters, and N) and internationalization (I, 18 letters, and N).
- \projects -- Contains files related to non-deliverable projects, such as infrastructure.
- \release-notes -- It would be nice to eliminate these, but we seem to need them in every release.
- \templates -- Contains our group's templates, the files that give our documentation visual consistency. Subdirectories might be \online and \print.
- \online-help -- Contains one subdirectory for each online help project, for example, \install and \licensing.
- <book> -- We place files for each book in their own top-level directory. We name the book directories so that it's easy to guess at their contents. Book directories sometimes contain planning documents as well as the book files; your group needs to decide what works best.

Note that this structure assumes you are working on just one product. If you work on a larger project in which you are creating more than one product, you might want to organize your VOB so that the top level contains directories for each product plus some of the project-related directories (for example, \common, \L10N.I18N, \projects, \templates). Subdirectories under each product directory would contain books and online help.

What Files Do You Keep in Rational ClearCase?

You usually keep source files in Rational ClearCase. In fact, ClearCase is sometimes called a source control system. The types of files you store in the doc VOB depend on the tools you are using. Here are some examples:

- Adobe FrameMaker: .fm (Frame files), .book (book files), and graphics and other files included by reference into your book.
- Microsoft Word: .doc (the basic Word file).
- ForeFront ForeHelp: .fhb.
- ehelp RoboHelp: .rtf (the Rich Text Format file that you edit), .hpj (WinHelp project file), and graphics files that you use with your help project.

These tools create additional files; because you don't need these files to recreate the final product, there's no need to keep them under source control. For example, you don't need to keep backup files or files that your tools create during an intermediate step of a build. And you don't need to keep final results (such as .hlp files) in your doc VOB, because your tools generate these files, and you don't edit them after they're created. (You may want to keep final results in a staging VOB if your group uses one; I discuss staging below.)

Hints for Using Rational ClearCase

Here are a few hints for successfully using Rational ClearCase on a daily basis:

- When you are about to edit a file, check it out. Technically, you can always edit a file without first checking it out; you just need to check out a file when you're ready to save it. However, performing a checkout before you start editing helps you avoid collisions with other writers who might want to work on the same file.
- Check in your files frequently, either when you reach a minor milestone, or at least every couple of days. Performing a check-in creates a record of your work, should you decide to restart from an earlier point. Also, files stored in Rational ClearCase are often backed up with greater regularity than are files stored on individual workstations. So checking in files provides greater security for your work.

- Always open and work on files in your Rational ClearCase view; don't copy your files to a non-ClearCase directory to work on them. When you work around ClearCase, you also work around the protections that ClearCase provides. For example, once you make a copy of a file stored in ClearCase, you again have the old problem of figuring out which file is the most recent or which one you should be working with.

How Do You Use Rational ClearCase for "Staging"?

Your release engineers are responsible for gathering all the files produced by your group and then creating a final product. They might build software into libraries (DLLs) or executables (EXEs). They also need to incorporate files that the writers on your team want to deliver to customers. To pass these files around, we use what we call a "staging" VOB, a separate area that contains just the finished files that will become part of a build. In some groups, release engineers build files for the writers. In our group, however, the writers build their own files. Once we test the files, we check out the previous version from the staging VOB, copy our built files over the checked-out files, and check in the new files. In our group, we usually stage online help files (.hlp and .cnt) and built book files, which we deliver to the customer as .pdf files.

Why go to all this trouble? First, when you use a staging VOB, you always know that your release engineers are building the product with the right version of each file. And second, by using a staging VOB, your release engineers can always reliably recreate a version of your product, no matter how many versions you've built since.

When Should You Label Files?

A label is just a human-readable marker on one version of a file. I recommend that you label files (and the directories containing them) when you reach a major milestone such as a beta or final release. Your set of labels provides a trail of breadcrumbs that allows you to retrieve an entire release's worth of source files, then re-create the final files for that release. For example, if the code names for your releases are based on types of bread, you might have the following labels in your VOB:

RYE_ALPHA, RYE_BETA, RYE_FINAL
MARBLE_ALPHA, MARBLE_BETA1, MARBLE_BETA2, MARBLE_FINAL
ANADAMA_BETA, ANADAMA_FINAL

This is one area where you want to exercise consistent discipline: label your files as soon as you reach your milestone. Otherwise, you will spend much time trying to recapture a past release.
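If your group wants to make milestone labeling automatic -- say, as the last step of a build -- the cleartool command line can be scripted. Here is a minimal sketch in Java; the label name and view path are illustrative, and it assumes cleartool is on the PATH, that your current view selects the versions you want to label, and that the label type has not already been created.

```java
import java.io.File;
import java.io.IOException;

// Minimal sketch: create a label type, then apply the label to every element
// version selected by the view, recursively from the VOB root.
public class LabelMilestone {
    public static void main(String[] args) throws IOException, InterruptedException {
        String label = "RYE_FINAL";                      // illustrative label name
        File docRoot = new File("M:\\my_view\\docvob");  // hypothetical view path into the doc VOB

        // Create the label type in the VOB (-nc = no comment); run once per release.
        run(docRoot, "cleartool", "mklbtype", "-nc", label);

        // Attach the label to the selected version of every element in the tree.
        run(docRoot, "cleartool", "mklabel", "-recurse", label, ".");
    }

    private static void run(File workingDir, String... cmd)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).directory(workingDir).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new RuntimeException("Command failed: " + String.join(" ", cmd));
        }
    }
}
```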

Where Do You Go From Here?

So now what? How do you get started? I recommend these starting points:

- Take the Rational ClearCase tutorial, which will give you a good foundation in the basics you'll need as you use ClearCase every day.
- Take a Rational ClearCase training course (see Rational's website for more information).
- Work with the Rational ClearCase experts in your group to get help with setting up your environment. These experts might include your release engineer, your buildmeister, a project leader, or what we here call "the resident smart person."

Good luck!

Quality by Design: Enabling Cost-Effective Comprehensive Component Testing

by Brian Bryson
Technology Evangelist
Rational Software

Comprehensive unit testing, a key strategy for infusing quality into the software development process, has not yet gained widespread acceptance. This article examines the obstacles impeding its acceptance and introduces new technology from Rational Software designed to overcome them.

Last spring, one of my co-workers learned that his car was being recalled for some component defects. As it turned out, this proved to be little more than an inconvenience for him. He took the car to the dealer, the dealer replaced the defective components at no cost, and he got his vehicle back a few hours later. No harm done, save for wasting a few hours at the dealer instead of being chained to his desk.

From the automotive manufacturer's point of view, however, the problem was much more serious. More than 16,500 owners were affected by that recall, and the total cost to the organization would eventually exceed $114 million. Oops.

It's hardly a stretch to conclude that the manufacturer could have saved a ton of money by finding that defect a little earlier in the manufacturing process. Even if they'd detected it after all the cars were assembled but not yet shipped, the savings would have been significant. Plus, they could have avoided most of the embarrassment, customer frustration, and ill will. Now think how much they could have saved had they discovered the problem on the drawing board...

Quality by Design and Comprehensive Unit Testing

The Quality by Design approach to producing a product encompasses a series of strategies, processes, and practices that infuse quality into the early stages of development.

It is neither new nor unproven. It was used to build the Boeing 777 aircraft, for example. More than 90% of the testing for this plane was completed against computer design models. Only one prototype was actually constructed before the first aircraft was assembled, and the savings were in the millions. In his Software Project Survival Guide, published in 1997, author Steve McConnell validates the Quality by Design approach by reporting that fixing defects before coding costs many times less than fixing those found at or after product delivery.

These are impressive numbers, and they attracted a lot of notice in the business world. So why don't we all pursue a Quality by Design approach? In fact, why haven't we been doing so for years? These questions are too complex -- and Quality by Design is too comprehensive a topic -- to cover in one short article. But we can begin to understand both the approach and why there has been great resistance to adopting it by looking at one aspect of Quality by Design in relation to software development: comprehensive unit testing.

Removing Obstacles to Comprehensive Unit Testing

Although software developers everywhere acknowledge that there are tremendous benefits associated with comprehensive unit testing, the practice is far from commonplace, especially for testing middle-tier, GUI-less components. Why? Because it is time consuming and tedious. And in the past, the costs of overcoming these obstacles frequently outweighed the benefits.

One big problem is that most tests are tailored for a particular component; there is little opportunity for reuse. Development teams are under extreme time pressure, so they feel obliged to focus on developing the application itself to stay on schedule. Typically, they view the process of building test harnesses and stubs from scratch, using them, and then throwing them away project after project as wasteful. So they focus their limited resources exclusively on writing code for components rather than on testing them.

Fortunately, it doesn't have to be that way. Rational has just introduced a new technology that enables cost-effective comprehensive component testing. Rational QualityArchitect is part of Rational's continuing efforts to provide developers with the tools they need to deliver higher quality software in less time. Rational QualityArchitect eliminates the most tedious and difficult aspects of component testing by leveraging the knowledge captured in visual models to automatically generate test code. Developers can focus on the actual test cases they need to create instead of spending time writing error-prone, throwaway test code.

Component Testing Without Rational QualityArchitect

To better understand how this new product makes comprehensive unit testing more achievable, let's take a closer look at some of the challenges of testing components without QualityArchitect.

Figure 1: Four Components for Testing

Figure 1 depicts four untested architectural components. Consider the implications of developing tests for Component B, which is currently ready for testing. Quite likely the other components (A, C, and D) are not yet ready for testing, and even if they are, they may contain defects that would blur the test results and make the job of tracking down problems in Component B that much more difficult. For these reasons, developers typically write their own throwaway test drivers and stubs.

Now consider the complexity of the test driver requirements. A test driver to simulate the behavior of Component A must drive Component B, make calls into it, provide a range of inputs, and record responses. Meanwhile, all of the functions in Components C and D that Component B relies on must be stubbed out, and they must return appropriate results based on the input from Component B. Sounds like a complicated recipe for alphabet soup, doesn't it?

In addition, even after tests are completed for individual components, there are still significant challenges to overcome. In scenario testing, two or more components are brought together to test a given sequence of calls. If the client software does not already exist, developers need to spend time creating a mock client to drive the scenario. Some development teams report that they spend more than half of their entire development time creating these test harnesses, which are seldom reusable.

Component Testing With Rational QualityArchitect

Rational QualityArchitect provides the key to cost-effective comprehensive unit testing: leverage the artifacts developers created earlier in the process -- their visual models -- to generate test harnesses. Once developers know how each component has to behave, they capture that knowledge in a model. The visual models that developers can produce with Rational Rose are especially powerful in this respect, as they are used to generate the code for these components automatically. That's why Rational QualityArchitect is packaged as a component of Rational Rose Enterprise.

For unit testing, developers need to accomplish three objectives:

1. Test the individual methods of a single software component.
2. Test multiple methods across multiple components in sequence.
3. Generate stubs for incomplete or non-existent components, so that testing one component does not depend on the existence of others.

Each of these three tests, in turn, is made up of two parts:

A. The test harness, or skeleton code, that drives the test.
B. The test case data.

That's it. Simple, no? Let's look at an example to make this a little more concrete. Say I'm going to test a single Enterprise JavaBean (EJB) component. I need to do two things: First, create all the test code (A) that will connect to the server where the EJB resides, then instantiate the bean, call the operations on the bean, and validate the returned results. Then, create the test data (B) that is used when calling the individual operations.
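To picture what that test code looks like, here is a hand-written sketch of the single-component test just described: connect to the server, instantiate the bean, call an operation, and validate the result. The Pricer bean, its home interface, the JNDI name, and the expected value are all hypothetical, and a real run would need the bean deployed to a J2EE server. As the next paragraphs explain, QualityArchitect's point is precisely that this scaffolding can be generated from the model instead of written by hand.

```java
import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

// Hypothetical remote and home interfaces for the bean under test.
interface Pricer extends EJBObject {
    double quote(String item, int quantity) throws RemoteException;
}

interface PricerHome extends EJBHome {
    Pricer create() throws CreateException, RemoteException;
}

public class PricerBeanTest {
    public static void main(String[] args) throws Exception {
        // 1. Connect to the server: look up the bean's home in JNDI.
        Context ctx = new InitialContext();
        Object ref = ctx.lookup("java:comp/env/ejb/Pricer");   // illustrative JNDI name
        PricerHome home = (PricerHome) PortableRemoteObject.narrow(ref, PricerHome.class);

        // 2. Instantiate the bean.
        Pricer pricer = home.create();

        // 3. Call an operation with one row of test-case data...
        double price = pricer.quote("SKU-1234", 10);

        // 4. ...and validate the returned result against the expected value.
        if (price != 99.50) {
            throw new AssertionError("expected 99.50, got " + price);
        }
        System.out.println("Pricer.quote test passed.");
    }
}
```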

Although creating the test data is more challenging, creating the test code is actually more time consuming -- and tedious. But remember: All the information needed to create the test code already resides in my visual model. The visual model contains a structural description of the component and its operations, plus its operations' arguments and return value types. Whether the visual model was created by analysts and developers during the design stage or reverse engineered from live components is irrelevant. All the inputs for creating test skeleton code are there for me.

This is where Rational QualityArchitect actually takes over. By examining the structure of a component as set out in a visual model, it can generate all the test code necessary for a single component or for a test involving a sequence of operations across multiple components. QualityArchitect can even generate stub components to act as placeholders until components in development are deployed.

The test code, however, is only half of the solution. I still need test data. Here, QualityArchitect makes my job as a developer easier. No longer chained to the tedious process of having to create test code, I can focus attention on creating interesting and meaningful test data. QualityArchitect can even help out by generating random test data where specific cases aren't necessary. Where specific test data is necessary, QualityArchitect provides a simple spreadsheet-like interface for me to enter the data.

Cost Savings, Time Savings, and No Recalls

Without Rational QualityArchitect, the early testing required for comprehensive unit testing is such a time-consuming and inefficient undertaking that, despite its obvious value, many organizations forgo it. With Rational QualityArchitect, early testing becomes truly feasible, because QualityArchitect generates test harnesses and stubs automatically -- and not just once, but incrementally, as the model evolves throughout development. For a developer doing unit testing, Rational QualityArchitect virtually eliminates the time-consuming work of creating throwaway code, leaving only the simple task of populating data into a spreadsheet. Most important, however, is that all of this testing is done off the visual model, at the very earliest stages of development. In effect, by leveraging pre-existing assets for testing operations, Rational QualityArchitect enables a development team to adopt a Quality by Design approach without stealing a lot of time from basic development work.

Although recalls are not a common practice in the software industry, the consequences of a defective software system can be even more expensive than those of a defective automobile -- and just as devastating. The goal of using a Quality by Design approach for every software project is worth striving for. Early testing might have saved a certain automaker $114 million, and it can yield similar savings for companies that create ambitious software systems. Boeing has already proved that you can safely test an entire aircraft based on computer designs. Rational QualityArchitect ensures that you can do the same for complex software systems.

Open Applications Group (OAGI) Adopts Rational Rose to Model Pioneering Business Software Integration Specification

by Barry Ewell
Senior Product Marketing Manager, Visual Modeling and Developer Tools
Rational Software

Imagine what it would be like if every business application you purchased, or every upgrade to your current system, were simply a matter of "plug and play": no worry about internal integration with packaged applications, legacy systems, or homegrown development, and no concerns about external integration with partners, customers, suppliers, and transportation providers.

Unfortunately, most customer organizations never come close to realizing this dream. Why? Because there are so many problems to solve across so many dimensions: building a common integration backbone; choosing a common approach to APIs; requiring that all business software everywhere use these APIs, whether purchased, legacy, or homegrown. It's just too much for any one stakeholder -- or group of stakeholders -- to tackle single-handedly.

Fortunately, there is an organization comprising many stakeholders working on these problems. That organization is the Open Applications Group (OAGI), and it's actually making plug-and-play integration a near reality.

A Constantly Evolving Model for Application Integration

The mission of the OAGI, a non-profit consortium, is to promote easy and cost-effective integration of enterprise business application software components.

The OAGI does this by continuously developing a practical and implementable best practices model for business software interoperability. It also provides an impartial forum for industry stakeholders to learn, cooperate, and further improve the model. The model is based on the Open Applications Group Integration Specification (OAGIS), which includes a broad set of Extensible Markup Language (XML) schemas, first published in 1996, for sharing business information.

Today, OAGI is the largest publisher of XML-based content for business software interoperability in the world. Its members have all participated extensively in building this industry consensus-based framework for business software application interoperability. They have also developed a repeatable process for quickly developing high-quality business content and XML representations of that content.

OAGI members include prominent business software vendors, EAI vendors, systems integrators, and end-user organizations around the world: Active Software, Agile Software, Ariba, AT&T Wireless, Atofina, Bluestone Software, Boeing, Candle, Canopy, CIMLINC, Compaq, Compuware, Cyclone Commerce, DHL, Digital Paper Corporation, Electron Economy, EntComm, epropose, excelon, Extricity Software, Ford, Future Three Corporation, GloTech, Great Plains, HMS, i2, ibaset, IBM, irista, Inc. (formerly HK Systems), ISSG, J.D. Edwards, Killdara, Lockheed Martin, Lucent Technologies, Mega Intl, Mercator, Microsoft, NEC, Net Commerce Corp., Netfish Technologies, Netonomy, NexPrise, NextSet Software, Inc., ObjectSpace, Inc., OnDisplay, Optio Software Inc., Oracle, PaperExchange.com, PeopleSoft, PricewaterhouseCoopers, PSDI, Requisite Technology, Robocom Systems, SAGA Software, Sand Hill Systems, SAP, Silverstream, SoftQuad Software, Software Technologies Corp., StreamServe, Inc., SynQuest, Inc., Teklogix, Tilion, Inc., TradeAccess, Trilogy, Unilever PLC, USData, Viewlocity, webmethods, Wonderware, XML Global Technology, and XMLSolutions.

Taming the Model's Complexity

Since its inception in 1995, OAGI has developed a track record for innovation as it continues to extend the specification's reach well beyond traditional Enterprise Resource Planning (ERP) concerns. Today, it has teams working on specific application integration domains such as Manufacturing, Order Management, e-Catalog, RFQ, and Quote. Currently, the OAGI offers 170-plus downloadable transactions, and the number increases monthly.

Of course, along with this growth in application integration domains has come a commensurate increase in the OAGIS's complexity. By early 2000, the specification consisted of well over 5,000 detailed pages within 150-plus Microsoft Word documents. This complexity, the OAGI acknowledged, was becoming unmanageable. The organization had created correlating XML DTDs (Document Type Definitions), which the W3C (World Wide Web Consortium) recommends as a way to define XML documents. Maintaining consistency when combining these documents was increasingly difficult, especially when changes and updates were needed across the specification.

So the OAGI formed a team to explore the idea of using the Unified Modeling Language (UML) to describe and document the OAGI specification. The team was charged with evaluating the UML's ability to model XML messages and to generate OAGI's XMI (XML Metadata Interchange) format for representing the model.

"Our modeling team began using and evaluating a number of UML tools in XML development," said David Connelly, the organization's CEO. "After careful and thorough evaluation, the working team chose Rational Rose as the product that supported our repeatable process for quickly developing high quality business content and XML representations of that content."

A Comprehensive Visual Model

The team's goal was to use Rational Rose to model the OAGI's defined scenarios and supporting specifications associated with planning, managing, and executing the business functions of an enterprise. Supporting databases and spreadsheets would not be part of the modeling effort, which would focus on content (not technology) in business areas such as General Ledger, Payroll, Inventory, Purchasing, Customer Order Management, and Production. The scope of this project included:

- Integration of enterprise business software applications with extra-enterprise systems;
- Integration between enterprise business software applications;
- Integration of enterprise business software applications with enterprise execution systems.

Once the modeling project was completed, OAGI members, customers, and suppliers would be able to more effectively, easily, and quickly realize the benefits of the OAGI best-practices-based model for interoperability.

The OAGI is building a content-based virtual business object model that enables an enterprise business application to build a virtual object wrapper around itself through the use of OAGI-compliant APIs. Interoperability is achieved with object-oriented advantages -- but without the requirement to implement a software application in a specific object-oriented technology. To communicate with a business software component in this model, events are communicated through the integration backbone in the form of an OAGI-compliant Business Object Document (BOD) to a virtual object interface. The integration servers provide services such as publish and subscribe, request and reply, transport mechanisms, data mapping tools, integration routing, and logging capabilities.

Delivering Results with Rational Rose

The project started in March 2000, with a projected completion date in early 2001. This was a very ambitious schedule for a team comprising members from seven different industries. Moreover, for all participants, this effort was secondary to their primary job responsibilities within their respective companies. The following statements reflect team members' satisfaction in working with Rational Rose:

Sherif Sirageldin, Senior Systems Engineer, CIMLINC, Inc.: When the project started, there was a concern as to what methodology we were going to use to model the Open Applications Group's XML Business Object Documents (BODs). CIMLINC had been using the Rational Unified Process for business and application modeling and was very pleased with the process and results. By including the modeling methodology for XML schema (developed by Grady Booch from Rational Software and Dr. Mathew Fuchs at CommerceOne), we felt we had hit on the right approach for the OAGI. It took approximately three weeks for the team to import the core elements into the Rational Rose model, followed by another six to seven weeks of getting the team's consensus to model the first specification, for "purchase order," which is a very complicated message.

Once the foundation was created, it took less than 90 days to completely model the OAGI specifications. All together, it took from March to September 2000 to complete the modeling task. The key result is that we have now Web-enabled the specification so members can review and use it more effectively. It's a single repository that is very easy to use. With the push of a button, we can produce everything needed, from OAGIS documentation right down to the DTD.

John Wunder, Global Combat Support Systems (GCSS) Architect, Lockheed Martin: Throughout the development, Rational Rose allowed us to present the architecture formally throughout the model. Prior to UML, we were using PowerPoint charts to communicate, and everyone had a different interpretation of what the architecture should look like. When developing the models, the Rational Rose "Check Model" capability allowed us to check each other's work to make sure that our work was consistent. The work of each team easily plugged into the other teams' models. And because many of the members also use UML, it was very easy for our team's work to provide great extensibility from the OAGI to member companies.

Kurt Kanaski, Distinguished Member of Technical Staff, Lucent Technologies: My role in the modeling project has been the "Meta Model." When the OAGI members designed the mechanism to define the APIs necessary to build their model, they had the foresight to determine that a fixed-length mechanism was not flexible enough to accommodate the various needs of communicating between business software components. As a result, the members built a self-describing mechanism called a Business Object Document. The Business Object Document (BOD) uses a concept called metadata to describe itself to other software components. The BOD itself is not an object. It is an application architecture that is used to convey the communication and the data necessary to carry out the requested business event. The metadata that enables the BOD to be self-describing is data that describes data; it provides a flexible mechanism that will describe itself to another component and ensures that only the information necessary for accomplishing the task is sent. This architecture provides a model that is faster to develop, easier to support, and ensures higher performance for the end user. The choice of Rational Rose has allowed us to model everything as one, so we could derive all the artifacts from the same source, thus eliminating any inconsistencies. The reaction of the OAGI members has been extremely enthusiastic.

Other team members cited additional benefits from using Rational Rose, including:

- Speed of development. Rose allowed the team to stay well ahead of the 2001 delivery schedule.
- A single repository for all information. Rather than 180 documents, there is now one Rose model. At the push of a button, members are able to download from the Web all the documents and DTDs.
- Dramatically improved configuration management. If a change takes place in a model or document, the change is automatically propagated throughout the model, to both the Word documents and the DTDs containing the lowest-level data type definitions for the XML.
- The ability to click anywhere on the model and find the documentation. This eliminates time-consuming searches through individual documents.
- A dynamic communications vehicle that creates seamless views throughout a project lifecycle. Once requirements have been gathered and documented, Rational Rose becomes a repository for all facets of design information and the sole means by which code structure is generated, including automated testing procedures.
- An architecture that can be used over and over for similar problems. The details may change, but the architecture stands up.
- The ability to dramatically extend members' development capabilities by tapping into each other's analysis and design models based upon OAGI specifications.
- An easy and seamless way to add new members who bring additional content to the model, and to easily revise the model in accordance with member feedback.

Overall, using Rational Rose as a visual modeling tool enabled the OAGI team to develop a compelling best practices model that has dramatically lowered members' business software deployment and maintenance costs and increased their agility for competing in a faster-paced business world. Because of the organization's commitment to ongoing research and development, it will continue to lead the way in best approaches to business software integration and in helping enterprises achieve the dream of "plug and play" interoperability.


New Solutions for Rational Product Questions Are Just a Click Away

by Jason Ross
Rational Software

How do you pass the time while waiting on a hotline to talk with a technical expert? If, like many, you surf the Web, then the answer to your problem may be just a mouse click away. Rational's searchable Technical Notes section on Rational.com now offers thousands of helpful tips on how to resolve common issues. Topics are broken down by product line and are fully searchable.

Technical Notes are written on an as-needed basis by the same support professionals who manage the Rational help desk -- to more fully document a Rational product or to provide more information on common error messages, for example. These brief documents provide a more dynamic way for Rational customers to obtain information on patches, support for new operating systems, and solutions for issues that may be beyond the scope of our standard documentation.

So the next time you're stumped by a technical issue and need help from Rational, you can still call our tech support line. But first, you may just want to drop by Rational.com's Technical Notes section and try clicking your way to a solution! For some terrific samples, see the new Tech Tips section in the etech column of this issue of The Rational Edge, or go directly to the Technical Notes section of Rational's website.

Enterprise Java and Rational Rose -- Part I

by Khawar Ahmed
Technical Marketing Engineer
Rational Software

Loïc Julien
Software Engineer
Rational Software

"We believe that the Enterprise JavaBeans component model and associated Java 2 Enterprise Edition standards will dominate the application server market and drive a potential market growth rate of almost 180 percent year-to-year." -- Mike Gilpin, Giga Information Group

In this two-part series, we explore the synergistic relationship between Java 2 Enterprise Edition (J2EE), a popular development platform for distributed enterprise applications, and Rational Rose, the industry's leading CASE modeling tool. Part I provides an introduction to J2EE from an architectural perspective. Part II will begin by explaining how Servlets and JSPs work within the J2EE architecture and then go on to show how Rational Rose can help developers build better J2EE applications.

With the advent of the "network is the computer" application paradigm, the popularity of distributed applications has grown dramatically, and J2EE has quickly become one of the dominant distributed application environments. If its adoption rate to date is any indication, J2EE, which includes the Enterprise JavaBeans (EJB) technology, should prove to be a promising means for achieving component-based software development. Of course, component-based development will be effective only if it can be used to build distributed systems with contemporary n-tier application architectures, and if those systems are portable across different hardware and runtime environments: servers, underlying databases, etc. We believe that using Rational Rose greatly simplifies the task of developing J2EE applications.

J2EE: Responding to Enterprise Application Needs

Enterprise applications have grown more complex over the past several years, reflecting evolutionary changes in software technology. Let's start by looking at some common characteristics of these applications:

- Enterprise applications are often mission critical, meaning they have a direct impact on the bottom line. Think about businesses like eBay or E*Trade: their Web site IS their business!
- Enterprise applications are often distributed; they are deployed either on multiple machines in the same general area or on geographically separated machines. These days, the typical distribution vehicle is an intranet or the Internet.
- Enterprise applications typically require the ability to handle a large number of users (or the flexibility to expand quickly should the need arise).
- Enterprise applications require certain services, such as security (to prevent unauthorized access) and complex transaction processing (an Internet bank would require both withdrawal and deposit transactions to complete a fund transfer, for example), as well as database access.
- Enterprise applications often have a large user base. System administrators should be able to upgrade, maintain, and redeploy the application with minimal effort.

These are exactly the kinds of challenges J2EE is designed to address. Every new technology is created in response to certain needs, and J2EE is no exception. Essentially, it is a unified release of various Java specifications developed and popularized by Sun over the last few years. The J2EE specification focuses on two basic categories: technology and API specifications.

The technologies within J2EE address server-side development needs. These include:

- Enterprise JavaBeans, which are used for building components that live on the server.
- Java Servlets, which provide the means for interactions with Web clients.
- JavaServer Pages, which allow developers to create dynamic content for thin clients.

The primary purpose of the J2EE APIs is to enable developers to write J2EE applications in a vendor-independent, portable fashion. The following J2EE APIs are available:

- J2EE Connector -- to link an enterprise application to one or many EISs (Enterprise Information Systems).
- JDBC Standard Extension -- to link an application to a relational database.
- Java Message Service (JMS) -- to bring the power of messaging to an application.
- Java Transaction API (JTA) -- for transactional enterprise applications.
- JavaMail -- to bring a mail mechanism to enterprise applications.
- JavaBeans Activation Framework (JAF) -- used within JavaMail.
- Java API for XML Parsing (JAXP) -- for any XML-based enterprise application.
- Java Authentication and Authorization Service (JAAS) -- to bring a security layer to enterprise applications.

What really makes the J2EE packaging work are the new things Sun Microsystems has added:

- An Application Model. This development guide helps you understand how to use the various pieces of J2EE.
- A standard platform for enterprise application development.
- A compatibility test suite. Vendors of J2EE products use this suite to test their products for J2EE compliance.
- A reference implementation. This is an implementation of the platform described above. It gives you an operational perspective on J2EE: something you can actually run and see in action. It's great for hands-on learning about J2EE, demonstrating J2EE capabilities, and so on.

Simply put, J2EE makes it easier for mere mortals to write some very complex software. Essentially, you don't need to be an expert in distribution issues, scalability issues, and all the nuances of security. That is all built into the specification. Vendors of J2EE runtime environments provide the underlying technology for you to use out of the box. To a large degree, you can mix and match tools, vendors, and technologies. That means you are not hostage to a single vendor. Plus, you can leverage existing investments instead of starting from scratch when you make an infrastructure change.

Another advantage of J2EE is that it decouples application development from deployment and execution. You can defer the details of deployment to the Deployer, a new role defined by the J2EE specification. The Deployer specializes in, and is responsible for, deploying J2EE software to specific servers. Separating this function allows the developer -- or Application Component Provider, as J2EE refers to the role -- to create a generic application. The Deployer is then free to customize the application for the target execution environment, in accordance with what database will be used, who is allowed to access the application, etc.
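A small sketch makes the Deployer's role concrete. The code below refers to a database only by a logical name (the name and query are invented for this example, and the code must run inside a J2EE container for the lookup to resolve); it is the Deployer, not the developer, who binds that name to an actual database for the target environment, so the same code can run unchanged against any vendor's server.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Sketch: vendor-independent database access via the JDBC Standard Extension.
// The code names only the logical resource "jdbc/OrdersDB"; the Deployer maps
// that name to a concrete database in the deployment descriptor.
public class OrderLookup {
    public static int countOpenOrders() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/OrdersDB");

        Connection con = ds.getConnection();
        try {
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders");
            return rs.next() ? rs.getInt(1) : 0;
        } finally {
            con.close();   // return the connection to the container's pool
        }
    }
}
```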

Finally, using J2EE makes it easier for a developer to build, maintain, and update an enterprise application. It allows you to use third-party or Commercial Off-The-Shelf (COTS) components in your application. If something changes, you can modify just that specific component rather than the whole application, and so on. To do the work of putting together components from different sources, J2EE defines another new role: the Application Assembler. Figure 1 shows the relationship among the different roles that J2EE specifies.

Figure 1: The Relationship Among J2EE Roles

In a nutshell, J2EE brings together the pieces and players required for building scalable, distributed systems, and provides a comprehensive platform for building enterprise applications in Java.

J2EE and Other Java Platforms

You are probably wondering if there's a relationship between J2EE and the other Java platforms. In fact, there is! Sun has defined three platforms, all derived from the core Java technologies. They are targeted to three specific domains in order to provide specialized solutions for each:

- The consumer and embedded market (Java 2 Platform, Micro Edition).
- The "general" application domain that uses core Java technologies (Java 2 Platform, Standard Edition).
- The enterprise application domain and the e-business market (Java 2 Platform, Enterprise Edition).

J2EE and Multi-Tier Architecture

An enterprise application may (but does not necessarily) consist of several tiers. Tiers are primarily abstractions to help us understand the architecture. The J2EE architecture usually involves four distinct tiers, as shown in Figure 2.

Figure 2: Multi-Tier Architecture

Say you are doing some shopping on the Net. Your browser is in the Client Tier, displaying applets, HTML, etc. When you press "Submit," it invokes a servlet running on the Web server, which resides in the Web Tier. The servlet may need to get some data via an EJB residing on the application server in the Business Tier. The EJB may then need to access a database in the Enterprise Information System (EIS) Tier to retrieve the information you want.
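To give the walkthrough some shape, here is a minimal sketch of the Web Tier piece of that flow: a servlet that handles the "Submit" request. The class and parameter names are hypothetical; in a full application, the marked spot is where the servlet would delegate to an EJB in the Business Tier:

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class SubmitOrderServlet extends HttpServlet {
        public void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            String itemId = req.getParameter("itemId");
            // Here we would look up a session EJB (via JNDI) and ask it
            // to place the order -- the Business Tier step in the text.
            res.setContentType("text/html");
            PrintWriter out = res.getWriter();
            out.println("<html><body>Order received for item " + itemId + "</body></html>");
        }
    }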

More precisely, the tiers are distinguished as follows:

- The Client Tier -- The Client Tier provides for the interaction between the Web application and the end users, typically through a thin client such as a browser. The technologies involved in this configuration are D/HTML, XML, XSL, Java applets, etc. A client may also be an "application-based" client that connects directly to an Enterprise Information System; such clients are commonly referred to as thick clients.
- The Web Tier -- The Web Tier is the interface between the end user and the business logic of your application. By separating the presentation logic from the business logic in this fashion, you can update the look and feel of your application without any modification to the business logic itself. This also allows you to have a throw-away facade that lets you stay in sync with the latest Internet technologies. At this level, you typically find the JSP (JavaServer Pages) and Java Servlets technologies, as well as XML, XSL, HTML, DHTML, GIF images, JPEG images, etc. The 1.2 specification of J2EE also introduced the notion of a Web Application, which means that your Web-tier application can be packaged in a Java archive called a Web Archive (.war file).
- The Business Tier -- This is where you implement the business logic, that is, the actions that make up your application. These actions are encapsulated within components called Enterprise JavaBeans (EJBs). By far the most popular technology of the J2EE family, the Enterprise JavaBeans architecture brings to your application all the system-level services it might require, such as transactions, security, persistence, or multi-threading. These aspects of EJBs are handled by the EJB container, which we will discuss shortly.
- The EIS Tier -- In this tier, you provide persistent storage for the resources required by your application.

Although an application does not have to have all these tiers as independent entities, it helps to conceptualize an application component as belonging to a specific tier so you can structure it appropriately. Such an approach is recommended for achieving a sound architecture.

The J2EE Conceptual Model

Let's now visualize J2EE as an onion with several layers of skin. The outermost layer is the server software, which enables application software to run on the physical hardware. The container is the next layer in. It provides generic services and hosting for the enterprise application.

The enterprise application consists of EJBs, servlets, and JSPs. Each container provides services to the EJBs and servlets it is hosting. These services are provided via the J2EE APIs, as specified by the J2EE specification.

To understand the container model within a more familiar context, consider this: if you, as a developer, want to access a relational database, you probably don't really want to know what kind of caching architecture the database provider is using. Nor do you have a strong desire to implement all the access details. Given that all databases are conceptually similar, what you need is an abstraction for the database that easily provides access to the data and associated services. A container that holds your data along with a set of services is that abstraction. J2EE simply extends this container/data architecture to the enterprise application domain. Figure 3 shows the J2EE Conceptual Model.

Figure 3: J2EE Conceptual Model

The J2EE platform defines four containers: a Client Container and an Applet Container (Client Tier), a Servlet and JSP Container (Web Tier), and an EJB Container (Business Tier). These containers provide deployment and runtime support for the associated tier components. A container is located within a server. The relationship between a server and the containers within it is illustrated in Figure 4.

Figure 4: Relationship Between a Server and the Containers Within It

What Are Enterprise JavaBeans?

The primary purpose of Enterprise JavaBeans (EJBs) is to simplify the development of business logic. EJBs, which are non-visual, server-side beans, fulfill this purpose by specifying a general, server-side framework for building distributed and secure components that support transactions out of the box. So, you may ask, who provides all this distribution and security infrastructure? The answer is vendors such as IBM and BEA: when vendors state that their products comply with the J2EE EJB specification, they are signaling that their containers supply this infrastructure for you.

EJBs fall into two very distinct categories:

- Session beans. Think of these as beans that implement workflows or processes (for example, making a hotel reservation or transferring funds from one account to another). These are by nature transient activities; once the task is complete, the bean has no reason to exist.
- Entity beans. These are object-oriented representations of persistent data residing in relational databases (for example, a representation of hotels in Seattle, a business's customers, or your bank accounts).

Each EJB has a remote interface as well as a home interface, as shown in Figure 5. EJBObject and EJBHome implement these interfaces, respectively. The client never actually interacts with the bean directly; instead it calls methods on objects that you, as the bean creator, had nothing to do with.
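Concretely, the two interfaces for a funds-transfer session bean might look like the following sketch under the EJB 1.1 API. The bean and method names are hypothetical, and in a real project each interface would live in its own source file:

    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    import javax.ejb.EJBHome;
    import javax.ejb.EJBObject;

    // Home interface (TransferHome.java): the "factory" a client uses.
    public interface TransferHome extends EJBHome {
        Transfer create() throws CreateException, RemoteException;
    }

    // Remote interface (Transfer.java): the business methods the
    // EJBObject exposes and delegates to the bean.
    public interface Transfer extends EJBObject {
        void transferFunds(String fromAccount, String toAccount, double amount)
            throws RemoteException;
    }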

Figure 5: The Enterprise JavaBean Interfaces

The primary purpose of the home interface is to provide a "factory" interface for the EJBs. Clients use the home interface to locate, create, or delete an EJB. For example, a client creates an EJB by using the home interface's create methods -- methods that the developer of the EJB declares. The create() methods, of course, are not implemented by the EJB itself. Instead, they correspond to ejbCreate() methods in the bean, which get called in response to a create method call.

The remote interface covers everything else. It is implemented by the EJBObject, which, in essence, wraps the EJB and knows all about networking, transactions, security, and so on. EJBObject, through the remote interface, exposes the business methods implemented by the bean and delegates calls to the bean when those methods are invoked. The EJBObject provides other services as well, such as a way to test whether two EJBObjects are identical, a way to identify the corresponding home object, etc.

What is the upside of such a complex architecture? Well, for one thing, it means that no matter where the bean is located, the client doesn't need to do anything differently. So you can easily change things without breaking the application. Second, it enables the container to intercept requests so that it can provide all those services you get from your runtime environment. Do you want only a certain group of people to access a specific business method? No problem. If the deployer specifies security attributes at deployment time, then, when the method is invoked, the container can intercept the call to make sure only authorized people access it. The same is true for transactions, persistence, etc.

Session Beans

As we mentioned earlier, a session bean exists to carry out a specific task on your behalf. Think of it as an extension of a client program that executes on the server side. Now imagine that you are doing some bank transactions and using a session bean. Would it make sense for the bean also to perform tasks on behalf of another client who may be online at the same time? Obviously not, because your account information is specific to you. So session beans are typically private and cannot be shared. In essence, there is an ongoing interaction strictly between you and the session bean, and it maintains what is called "conversational state." Typically, there is no persistence associated with your session, meaning that if your session ends abruptly, it is usually not possible to recover and continue on.

The EJB specification distinguishes two kinds of session beans: stateless and stateful. Stateless session beans typically carry out "atomic" operations: the bean is asked to do something, and once the bean has fulfilled that request, the conversation between the client and the bean is over. A good example is a bean that implements a credit card authorization. You enter a number, the bean obtains the authorization, and it's done. Another party could then request another credit card authorization, and a new session could be started using the same bean. In other words, the container does not save any values for the bean's attributes between requests.

Stateful session beans are useful for more complex activities. These beans remember things from one method call to another, so you could call a bean repeatedly and continue your "conversation" or session. For example, if you were to go shopping on the Net and use a shopping cart to keep track of your purchases, a stateful session bean could represent that cart.

Entity Beans

The other type of EJB is called an entity bean. Entity beans were actually introduced in the first EJB spec, but container providers were not required to support them. That changed with the EJB 1.1 spec, and support for entity beans is now required for J2EE-compliant application servers.

Entity beans provide an object-oriented view of the persistent data in a database. Things such as customers, employees, and accounts, as well as banks, tickets, and reservations, all map nicely to entity beans, allowing you to work with objects rather than database records. There are many advantages to using entity beans. For one, you can simply call methods (e.g., myBean.setDestination()) instead of dealing with obscure SQL queries; a sketch of such an interface appears below. In addition, objectification allows you to reuse the entity bean concept throughout your system consistently.
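For instance, the remote interface for a hypothetical Reservation entity bean (a sketch under the EJB 1.1 API; none of these names come from the article) exposes business methods in place of SQL:

    import java.rmi.RemoteException;
    import javax.ejb.EJBObject;

    // Remote interface for a hypothetical Reservation entity bean.
    // The container translates these calls into database access;
    // the client never writes a query.
    public interface Reservation extends EJBObject {
        String getDestination() throws RemoteException;
        void setDestination(String destination) throws RemoteException;
        void setTravelDate(java.util.Date date) throws RemoteException;
    }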

Since an entity bean refers to database records, it needs to be uniquely identified. That's why you have a primary key for each entity bean. For example, for an employee entity bean, the primary key may be the employee ID. From a user perspective, all the details of database access, synchronization, and so on are taken care of, and things become much simpler. For instance, the container ensures that the same method is not called concurrently, unless the bean provider explicitly specifies otherwise.

Since entity beans deal with databases, there has to be coordination between the two to keep things in sync. This process of coordination is referred to as "persistence." Two types of persistence schemes are available for entity beans. The simpler one is called Container Managed Persistence (CMP). This is attractive because, as a bean developer, you simply tell the container to take care of things. You can have it take care of business logic, specify how entity bean attributes map to fields in the database, and then sit back and relax while the deployment utility generates all the SQL calls at deployment time. The nice thing is that, by using CMP, your entity bean doesn't have to embed direct database queries. So it remains independent of the database, and hence easily portable.

But say you want more control over how the persistence is handled (e.g., you want to do it more efficiently than the auto-generated code does). In that case, you can specify Bean Managed Persistence (BMP) and write all the database access logic as part of your bean. Of course, the disadvantage of bean-managed persistence is that you now have to understand the database structure, know SQL, and do a lot more coding, too!

Working with EJBs

In order to create an EJB, the first thing you need to do is find its home object. You do this by using the "nickname" for the bean and querying the Java Naming and Directory Interface (JNDI), which provides access to naming services such as Novell NDS, LDAP, etc. Once you have a reference to the home object, you can invoke a create method on it. When a create is invoked on the home interface:

1. The EJBHome creates an instance of the EJBObject and gives it a bean instance of the appropriate type to work with.

2. Once the bean instance is associated with the EJBObject, the instance's ejbCreate() method is called. For entity beans, if a matching record already exists in the database, then the entity bean is populated from it; otherwise, a new record is inserted into the database.

For session beans, the instance is simply initialized.

3. Upon completion of the ejbCreate() method, EJBHome returns a remote reference for the EJBObject to the client. Remember that since we are dealing with remote objects, a remote reference is actually a stub.

The client can now call business methods on the stub. The stub relays the methods to the EJBObject, which in turn delegates them to the bean instance (of the class we created), and the result is relayed back through the chain when the method returns. This sequence of events is shown graphically in Figure 6.

Figure 6: Using an EJB

Next Month: Servlets, JSPs, and Building Better J2EE Applications with Rational Rose

We hope that we've given you a good basic understanding of the J2EE architecture and its benefits, as we conclude Part I of this article with an invitation to join us next month. In Part II, we will first take a look at how servlets and JSPs work within the J2EE architecture. Then, we will discuss in detail some of the ways you can harness the visual modeling power of

Rational Rose to build better J2EE applications.

A Simplified Approach to RUP

by Gary K. Evans
President, Evanetics, Inc.

The demand to reduce complexity in object-oriented software development process and notation has become a continuing refrain. Participants in a recent forum at UML World [1] discussed the complexity of the Unified Modeling Language (UML) in its current form. Implicit in the experts' recommendations on ways to simplify the sprawling mass of UML (the Version 1.3 specification runs to 800 pages) was the acknowledgment that you cannot use UML effectively without a process to guide the modeler.

Lightening Up a Heavyweight

Although some companies are experimenting with emerging "lightweight" methodologies designed for use with UML, for most companies the Rational Unified Process (RUP) -- which has the appeal of name recognition and is certainly in no danger of being labeled "lightweight" -- is plenty new and different enough. It represents a significant departure from the waterfall approach still used by so many software companies. And for their development teams, RUP might be the most daring software process and management innovation they have attempted in years, if not decades.

Often, the adoption is not painless. Even after consulting the instructional literature and reading through RUP files, many of my clients are uncertain about how to sequence the steps and unclear about how the steps are connected. What they most want to know is, "What are the minimum steps we must take to be successful with RUP?"

The good news is that RUP really is a tailorable, fairly adaptable, and quite reasonable product for heavyweight companies to use in improving their software development process -- if they understand it. In fact, I wrote this article mainly to demystify the overwhelming mass of information in RUP. In addition, I've observed that companies often abandon an official

process in midstream because it's too difficult to describe and assimilate, relegating it to "shelfware" that sits unused in a manager's office. To save clients from this fate, I've posted a readable, usable, single 8.5" x 11" page .pdf version of RUP -- a "mini" RUP, if you will -- on my Web site. Although this document presents a recipe for applying RUP, the really hard part of changing a corporate software process is not in following a checklist -- it's in changing people's mindset. The waterfall process is built around an (unfounded) optimism that it is possible to understand everything at each stage of the development process before moving to the next stage. Look at a waterfall project schedule, and you will immediately see that it has no provision for going back over previously charted ground. Iterative processes, however, are much more honest. We acknowledge within the process that we cannot understand everything at a point in time; rather, we accumulate this knowledge over time, continually revisiting areas we have been to before. Changing how we think about software development is orders of magnitude more difficult than changing how we do development. This article offers a small contribution toward this change.

Understanding the Underlying Principles

RUP is based on a few important philosophical principles: a software project team should plan ahead; it should know where it is going; and it should capture project knowledge in a storable and extensible form. Since RUP was born at a software tools company, this emphasis is not a surprise. RUP also incorporates the concept of "best practices" for software engineering, defined by five major properties:

Use case driven -- Guided by interactions between the system and its users.

Architecture centric -- Founded on a defined architecture, with clear relationships among the architecture components.

Iterative -- The problem and solution are organized into small pieces, so each iteration purposely addresses only one of those pieces.

Incremental -- Each iteration builds incrementally on the foundation built in the previous iteration.

Controlled -- Control with respect to process means you always know what to do next; control with respect to management means that all deliverables, artifacts, and code are under configuration management.

A Manageable Iterative Development Process

This section lays out the minimal steps for an iterative software development process that I follow in my projects with clients in diverse industries. The description assumes you are familiar with core UML elements such as sequence diagrams and use cases.

Iteration Zero: Getting Started

Iteration Zero, which takes place before development begins, provides an opportunity to explore the breadth and depth of the system's requirements and goals. I usually tell my clients that we will not actually begin building the system until Iteration Three or Four, after we have proven our ideas about appropriate architecture as well as our understanding of the system's business model.

Step 1. Identify the most significant or visible functional and nonfunctional requirements for the system. -- Express the user-visible functional requirements as use cases or scenarios. Use cases alone, however, are not enough. Capture non-functional requirements in a standard paragraph-style document (or family of documents). [2]

Step 2. Identify classes that are part of the domain being modeled. -- Discovering classes from the requirements artifacts can be done in several ways: Class/Responsibility/Collaborator (CRC) cards, data mining, prior domain knowledge, or searching requirements documents for nouns and noun phrases. Experiment with different approaches. In searching for your project's classes, a single technique is seldom sufficient.

Step 3. Define the responsibilities and relationships for each class in the domain. -- Responsibilities are the strategic goals of a class. Responsibilities will determine the class's operations; the operations, in turn, determine the class's attributes (i.e., data). Do NOT start with data and try to derive class operations from that. That's called data modeling, not object modeling.

Step 4. Construct the domain class diagram. -- The domain class diagram is just a page of class boxes with class names: no relationships, operations, or attributes yet. Together with the responsibility definitions, this diagram lays a foundation for a common vocabulary in the project.

Step 5. Capture all identified use case and class definitions in an OO CASE tool. -- You must use a CASE tool for all but the most trivial projects. If you don't use one, you either won't do the models you need to do or you will be overwhelmed by the mass and volatility of the artifacts you produce. [3]

Step 6. Identify the major risk factors, and prioritize the most architecturally significant use cases and scenarios. -- I cannot emphasize this enough. Years ago, when I worked with a major computer manufacturer, they did a study on all the corporate projects that were more than 50% over budget or 50% over scheduled time. The major causes of these delays were all related to ignoring risk, or assuming that risk would go away. It doesn't. Address the highest-risk aspects of your system immediately, in early iterations, simultaneously with the most architecturally significant elements. This is absolutely imperative. Don't be tempted to pick the "low-hanging fruit" and allow the risks to ripen into major problems.

Step 7. Partition the major use cases/scenarios across the planned iterations. -- Don't try to embrace every use case or every detail in the beginning. Be selective; focus on the 20% of use cases that give you 80% coverage of your major system functions. During the iterations you can add in the less significant use cases. Decide whether you will use time-box partitioning (fix the iteration duration and select enough use case functionality to fit into that time frame) or scenario partitioning (fix the use case functionality and determine the time needed to carry that functionality to code).

Step 8. Develop an Iteration Plan describing each "mini-project" to be completed in an iteration. -- For each iteration, define the goals, staffing needs, schedule, risks, inputs, and deliverables. Keep the iterations focused and limited (I prefer 3-4 weeks per iteration). Strive to make each iteration a "mini-waterfall" project that enables you to "eat the elephant one bite at a time!" Each iteration description should cover all the software activities in the process: requirements, analysis, design, implementation, and test. In addition, each iteration will involve QA, Engineering, Product Management, and other departments -- and each iteration will produce an executable. Iterative

development is a "divide and conquer" strategy. If done properly, it allows you to know within days (or maybe weeks) if you are getting off schedule.

Development: Ten Steps for Subsequent Iterations

In Iteration Zero we have only surveyed the overall size and most prominent characteristics of our project. In RUP this iteration occurs within the Inception phase, when we are trying to reach a "go/no-go" decision on the project. At the end of Iteration Zero we should have a foundation from which we can generate more detail about what our system should do (this is analysis thinking), and how our system will achieve these goals (this is design thinking). In RUP we generate this detail iteratively, which means that we do the following ten steps six times if we have defined six iterations for our project.

Step 1. Merge the functional flow in the use cases and scenarios with the classes in the domain class diagram by constructing analysis-level interaction diagrams (i.e., sequence diagrams or collaboration diagrams) for each scenario in the iteration. -- The major benefit of dynamic models is that they allow you to discover the class operations that meet the business requirements in your requirements artifacts. If you construct only static models (i.e., class diagrams or object diagrams), you will have a high risk of failure. Consider a still photograph versus a video: each has unique properties, and each presents a distinctly different perspective.

Step 2. Test and challenge the analysis-level sequence diagrams on paper, a whiteboard, or a workstation screen. -- Follow the flow of messages in the interaction diagrams. Update the class diagram with the operations and data you discover from the interaction diagrams. Don't be concerned about performance or technology issues in this analysis phase. In analysis, we explore problems; in design, we define our solution. Try to keep these phases segregated in your thinking.

Step 3. Develop analysis-level statechart diagrams for each class with "significant" state. -- Statecharts describe the "statespace" for a single class. Sequence diagrams describe the messages sent among multiple objects to carry out the work of the use cases.

Step 4. Enhance sequence diagrams and statechart diagrams with design-level content. -- In design thinking we identify and add to the class diagram and sequence diagrams any required support or design classes (e.g., collection classes, GUI and other technology classes, datatypes, etc.). In the design activity of each iteration we take performance and technical considerations into account. Design-level artifacts include actual function names and arguments, and actual datatypes and return types.

Step 5. Challenge the design-level sequence diagrams and statechart diagrams on paper, discovering additional operations and data assigned to classes; update the class diagram with these operations and data. -- Update all of the artifacts and diagrams. Add or modify classes as necessary.

Step 6. Update the OO CASE tool information and redistribute to the project team. -- Re-publish system reports for team members. (Documentation should never be more than 24 hours old!)

Step 7. Develop code for the use cases/scenarios in the current iteration from the current diagrams. -- Remember, design is not coding. Design is the necessary preparation for writing code.

Step 8. Test the code in the current iteration. -- Testing is continual: test on the largest (project) level as well as on the smallest (unit) level. (For an excellent discussion on this topic, see Scott Ambler's Building Object Applications That Work, Cambridge University Press/SIGS Books, 1998.)

Step 9. Conduct an Iteration Review. -- Focus your review on essentials: ask what went right and what went wrong in the iteration. Did you achieve all of the goals listed in your Iteration Plan? Did the iteration take longer than planned? Did you fail to add planned functionality? For any area that did not succeed, ruthlessly determine the root causes of the failure and fix them. Then determine if you can still follow your current project and iteration plans -- if not, then change

them, too. Nothing is sacred. If you are not willing to change the remainder of the schedule or iteration content, then you are still in the waterfall mindset.

Step 10. Conduct the next iteration (i.e., repeat Steps 1-9), adding in the next set of use cases/scenarios. Continue conducting iterations until the system is completely built. -- This is the essence of an iterative, incremental process: do again what you just did (iterate), building incrementally on the base of the previous iteration.

Moving Forward

Is this a complete and sufficient list of activities to perform in a minimal version of RUP? Not at all, but it is a manageable roadmap to help you get comfortable with the essential aspects of RUP and UML. As your comfort level grows, you can add or delete artifacts or process steps as you see the need. Will following this mini-RUP guarantee project success? No, but it should be a useful guide if you are moving to RUP. The basic philosophy is pretty simple: know where you are going, and "eat the elephant one bite at a time." And most important, never, never lose sight of this rule: the process is here to serve you; you are not here to serve the process.

[1] "Defining the UML Kernel," Software Development Magazine, October.
[2] For some excellent tips on discovering use cases and writing use case descriptions, see my Web site.
[3] See "Do I really need an OO CASE Tool?" on my Web site for a brief discussion of what to look for in an OO CASE tool.

Pattern-Oriented Development with Rational Rose

by Professor Peter Forbrig, Department of Computer Science, University of Rostock, Germany; Dr. Ralf Laemmel, Department of Information Management and Software Engineering, Free University, Amsterdam; and Danko Mannhaupt, Software Engineer, Software Design & Management (SD&M), Munich, Germany

A design pattern describes a solution to a recurring problem in a systematic and general way, and design patterns are an accepted means of capturing and communicating experience in software design. Up until now, however, only single patterns have been used in CASE tools; there has been no support for combining patterns. This article will show how patterns can be combined in Rational Rose to develop new patterns. It will also show how the entire software specification process can be based on a combination of patterns.

The ideas in this article are based on a pattern-oriented programming model developed in a 1999 master's thesis by Normen Seemann at the University of Rostock's Department of Computer Science. This model led to the development and implementation (by Stefan Buennig and others in the department) of a Pattern-oriented Language (PaL) and a PaL-based graphical editor. Based on this work, Danko Mannhaupt recently developed a set of scripts for Rational Rose that supports most features of the pattern-oriented programming model to accomplish object-oriented design.

Limitations of Object-Oriented Specifications

Object-oriented specifications are very successful because they make it possible to reuse existing models. But they have their limitations. Let's say we have two classes, A and B, along with their methods ma and mb. Class B inherits from class A, so that B has two methods: mb and ma, inherited from A. The corresponding class diagram is shown in Figure 1.
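In Java, this simple arrangement looks like the following sketch (the class and method names are the article's placeholders):

    class A {
        void ma() { /* base behavior */ }
    }

    class B extends A {
        void mb() { /* B also inherits ma() from A */ }
    }

The difficulty the article turns to next is that the inheritance structure itself -- as opposed to the individual classes -- cannot be reused.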

Now, suppose we have two other classes: AS with method mas, and BS with method mbs. Theoretically, the scheme of inheritance from A to B should be reusable for these two classes. One arrangement would be for AS to inherit from A and BS to inherit from B. Figure 2 shows the corresponding class diagram.

Figure 1: A Class Diagram for a Simple Inheritance

But the problem is that now, BS does not inherit from AS. That inheritance relationship could be inserted manually, but that would create a multiple inheritance problem. In this article, we explain how to tackle this problem by using Rational Rose scripts. First, however, it is important to understand our programming model.

Figure 2: Attempt to Reuse the Inheritance Structure of Figure 1

The Extended Object-Oriented Programming Model

In the traditional programming model, patterns must be coded as conglomerations of classes. This results in a lack of both traceability and encapsulation. Patterns should be traceable in the source text so that the reader can easily identify them. E. E. Jacobsen argues that patterns are abstractions operating on top of programs. The new programming model provides a corresponding kind of abstraction to preserve the overall structure underlying a pattern. It treats a pattern as a nested class that encapsulates participating classes.

In the object-oriented world, reuse is based on concepts such as inheritance, composition, genericity, and interfaces. These concepts, however, are not sufficient to implement patterns in a reusable way. Instead, programmers are forced to code the solution a pattern provides for each specific application context. The simple example in Figure 2 illustrates the limitations of inheritance when it comes to facilitating reuse of class structures; the original inheritance relationship is not preserved in the refined class system (AS

and BS). Among other problems, the refined system cannot support polymorphism sufficiently. Also, generic classes do not provide a general solution for refinements of class structures. The duplication of classes participating in a pattern cannot be modelled within that framework, but we are able to do so within ours.

Figure 3 shows two patterns that introduce superimposition on class structures as another form of reuse. This is "grey-box" reuse, rather than the white-box reuse provided by inheritance, which works at the class level. In graphical notation, superimposition can be depicted with arrows. Entities (e.g., class structures, participating classes, or methods) are connected with dotted lines. Arrows, which go from the superimposed class structure to the resulting class structure, are used only if the classes or methods are renamed. If the class or method names remain the same, then the arrows are omitted. Note that the graphical notation does not emphasize parts of the resulting structure that are not provided by the reused class structures. Instead, it emphasizes the resulting structure and indicates the reused parts.

Figure 3: Class Structures and Superimposition: Visitor and Composite

The right portion of Figure 3 illustrates the class structure underlying the Composite pattern and its derivation by superimposition. The final class structure, PCOMPOSITE, is a slight abstraction of the corresponding GoF variant. There is an abstract superclass COMPONENT with a method operation and the subclasses LEAF and COMPOSITE. The latter class enforces an interface to add and remove components. COMPONENT declares an abstract method operation. The abstract class PARAMETER models parameters for operation.

Now, let us focus on superimposition. The class structure PCOMPOSITE was derived from two auxiliary patterns: the class structure PCONTAINER, modelling a minimal interface for container functionality, and PARAMETER, concerned with the idiom for methods with abstract parameters. Note that PCOMPOSITE adds structure to the reused class structures. LEAF was not present at all in PCONTAINER. Also, the inheritance relationship is established as required for Composite. Finally, note that classes enclosed by PCONTAINER are renamed in PCOMPOSITE.

The combination of patterns is an even more intricate problem than the reusable implementation of patterns. In the suggested model, combination is made possible by certain key features of superimposition that facilitate reuse of class structures. It is possible, for example, to unify or duplicate classes in a class structure and to merge different class structures. It is impossible, however, to perform such adaptations in terms of genericity and inheritance without breaking the class structure.

Integrating Patterns into Rational Rose

With this understanding of our model, we can now explore how to integrate object-oriented design based on patterns with a CASE tool such as Rational Rose.

Introducing a new static element

To supplement the existing static elements in Rational Rose -- Package, Class, and Interface -- let us add a new model element: Pattern. This element is a container for components, similar to a package. It has a name and may or may not be part of a package. Classes and interfaces can be pattern components. (Patterns can also be components inside other patterns, but this would add unnecessary complexity to our model.) The pattern model offers combination as a means of teaming up patterns. Associations and inheritance relationships complete the static model of the new pattern element. The pattern specification includes a detailed description like those in pattern catalogs. Potential documentation fields are: motivation, application domains, benefits, liabilities, consequences, examples, and related patterns.

Rational Rose can represent patterns -- which are related to packages -- in much the same way as packages. Rose's model tree can include patterns as structural elements with components and relationships. Each pattern has at least one class diagram that shows the pattern structure. A new type of diagram, the Pattern Diagram, shows patterns and the relationships between them. So Rational Rose displays both the development of the pattern model and the architecture of the system.

Specifying dynamic aspects

The specification of dynamic aspects of a pattern is very important from

the point of view of documentation. Therefore, it must be possible to create collaboration, sequence, and activity diagrams for a pattern and its components. These diagrams document cooperation and interactions among class components. They support developers with detailed design specifications, because they describe the responsibilities and interfaces of components. Furthermore, interaction diagrams simplify the implementation of pattern classes with object-oriented programming languages.

Working with Design Patterns in Rational Rose

Creating patterns

Patterns can be created within Rational Rose using the menu bar, the context menu of packages or patterns in the model tree view, or the diagram presentation. An empty design pattern without components is created at the current level -- that is, as part of the currently selected package.

Editing patterns

Pattern properties and components can be edited similarly to packages, using either the model browser or diagram views. A developer can initiate changes to the specification and documentation of the pattern itself by selecting a command from the pattern's context menu. This produces a dialog box that shows pattern properties. Pattern components are edited as if they were member classes of a package. Consequently, developers can add components by selecting a command from the menu bar or the pattern's context menu and edit these components using standard Rational Rose tools. They can also add and change attributes and operations and edit specifications and documentation. They can also add interaction diagrams that include pattern components and their instantiated objects. Interaction diagrams document a pattern's behavior in certain scenarios. Therefore, a number of different diagrams are required to describe a pattern in detail.

Refining and combining patterns

The refinement and combination process represents the core of the pattern model. The difference between refinement and combination is the number of source patterns involved in constructing a new design pattern. With refinement, one pattern is edited toward a particular domain or application. This can be achieved by editing a pattern's properties and adding or changing pattern components. Combination refers to the creation of a pattern based on a number of source patterns. Combination and self-combination (in which the same pattern occurs several times as a source pattern) represent a challenge for CASE tools. Essentially, a number of model elements have to be merged into one, and several requirements have to be taken into consideration.

Components

By default, the component set of the new pattern comprises the union of the component sets of all source patterns. It can be conceptualized as a "bag" that contains more than one exemplar of the same component if the designer has combined patterns with identical components.

The designer has several options for changing the default component setup. Components can be merged, but only if they come from different patterns. Typically, the number of combined components would be two or less, but in rare cases it may be necessary to merge more than two components. By default, a combined component contains all attributes and operations from its source components. Although components in combined patterns can be joined, however, their attributes and operations may or may not be joined. Let us suppose, for instance, that each of two source components has an attribute date. Only one attribute of this kind is necessary for the combined component. Deletion of properties would not be supported, however, because that would reduce functionality. Therefore, instead of deleting one attribute, the designer would join both attributes. That way, the specification and documentation of both source attributes would be sustained, and the semantics would remain intact. Associations of source components would also be preserved in the combined component. Once again, however, associations can be joined to remove redundancies. The same applies to inheritance relationships, but joining inheritance relationships can create multiple inheritance situations.

A Prototype Implementation of the Pattern-Oriented Approach

Rational Rose provides an extensibility interface that allows users and developers to enhance its functionality. This interface provides access to model elements, so that existing objects can be manipulated or removed and new objects can be added. Access is provided via an ActiveX control or by using RoseScript, a Basic environment within Rose. At the University of Rostock, two scripts were developed: one to create a new pattern and one to combine patterns. Both scripts can be executed manually, but they can also be integrated into the menu bars. A text menu file defines menu extensions. The menu file for the pattern extension is presented in Figure 4. It defines a new sub-menu -- "Pattern" -- for the Tools menu in Rose. Figure 5 shows how this sub-menu actually appears in Rational Rose.

Menu Tools
{
    Separator
    Menu "Pattern"
    {
        option "&New Pattern"
        {
            RoseScript $PATTERN_PATH\Scripts\new_pattern.ebs
        }
        option "&Combine Patterns"
        {
            RoseScript $PATTERN_PATH\Scripts\combine.ebs
        }
    }
}

Figure 4: Pattern Extension Menu File: pattern.mnu

Figure 5: Visual Representation of the Extended Tools Sub-menu in Rational Rose

Using Pattern as a Model Element

Rational Rose does not support the sort of fundamental extension that would permit us to implement a new model element with unique properties, so as an alternative we used Package, a standard Rose element, as a container for pattern components. The ability of packages to function as containers for other elements also enables them to serve as pattern representations in Rational Rose. However, they do not make it possible to reference pattern components in other patterns. Nor is it possible to indicate that a pattern contains other patterns.

To distinguish the new pattern packages from ordinary packages, we use UML stereotypes. Every pattern is marked with <<Pattern>>, and the stereotype was added to the standard stereotype list. Components of patterns are regular classes without either stereotypes or extensions. Therefore, whenever something is written about pattern components, it can be assumed that they have all the properties and features of classes.

The pattern interface is represented as a designated component of the pattern. It is marked with the stereotype <<PatternInterface>>. Like the pattern model, the interface contains the public properties of the pattern and provides services to other patterns or independent classes. Properties are represented with attributes that often refer to an instance of one of the pattern components -- the starting point of a pattern structure.

Documentation for patterns can be placed in the standard documentation fields for packages. We recommend that the documentation text be structured according to standard pattern descriptions, stating the context, problem description, solution, consequences, and perhaps examples. The documentation can also include references to other patterns, keywords, benefits, and liabilities. With static and dynamic diagrams, designers can visualize pattern properties and behavior. Class diagrams are used to show components and their relationships, including associations and inheritance.

The file PatternStereotypes.ini, for example, contains the following definition:

[Stereotyped Items]
Package:Pattern
Class:PatternInterface

[Package:Pattern]
Item=Package
Stereotype=Pattern

[Class:PatternInterface]
Item=Class
Stereotype=PatternInterface

Manipulating Design Patterns

To understand how to manipulate design patterns in Rational Rose, let us return to our first example: the simple reuse of an inheritance structure. First, we need to develop a pattern for the inheritance relationship, which should be reusable. Let us call this pattern InheritanceAtoB. Next we must specify a pattern that contains both classes AS and BS. We will call this pattern, which contains only these classes, PatternASandBS. We create these patterns the same way we ordinarily create class diagrams, which we will not describe in detail here. Figure 10 shows both patterns displayed in Rose. If we combine these patterns, we see the window in Figure 6.

Figure 6: Screen Dump After Selecting a Pattern: PatternAtoB

Existing patterns are displayed in the left box; we can click on one to

select it. Using the (>) button at the top, we can insert the selected pattern into the list of patterns in the Combined Patterns box, and its components are displayed below. If we select a component, we can see its attributes and methods. According to our theory, a pattern can have higher-level methods, which are represented by an interface class. Although these classes are used here, they have no importance for this example.

Figure 7 shows a screen dump following the selection of PatternASandBS. The resulting combination pattern now bears the name ReusedInheritance.

Figure 7: Screen Dump Following the Selection of a Second Pattern

Now we have to describe which components are merged together. We first click on a component in the listbox and then select a second component from the drop-down menu. According to the standard, the first element

determines the name of the combination, but this can be changed manually. The procedure would be the same if attributes or methods of a component had to be merged. Once all the necessary merges have been performed, we can press the OK button to generate the combined pattern. The first window to appear summarizes the combinations, as shown in Figure 8.

Figure 8: Displaying the Results After Combining Two Patterns

In our example, the generated class diagram for the new pattern is already perfect; no manual changes to the layout are necessary. Of course, this is not always the case, and manual changes are sometimes required. The class diagram, which can be found within the pattern (package) ReusedInheritance in the browser (see also the left side of Figure 10), appears as shown in Figure 9.

Figure 9: Class Diagram of the Resulting New Pattern

Class AS plays the role of class A, and BS plays the role of class B. These new classes reuse the inheritance structure of the given pattern and also inherit the methods of the corresponding classes.

Figure 10: Two Patterns that were Combined and the Resulting New Pattern

Conclusion

In this article, we have shown that a tool based on a pattern-oriented programming model can support pattern-oriented design, which, in turn, provides for a kind of reuse that an object-oriented approach alone cannot support. If you would like to try out the patterns approach yourself, you can download the scripts for Rational Rose that we used in our prototype implementation from the file RosePatterns.zip. The script new_pattern.ebs generates a new, empty pattern. Components can be inserted into this pattern in the traditional way that class diagrams are developed. The second script, combine.ebs, is used for combining existing patterns, as described in this article. A library of services used by the scripts is implemented in library.ebs, and a compiled version is library.ebx. The files pattern.mnu, pattern_addin.reg, and patternstereotypes.ini are necessary to register the stereotypes and new menu items. The file readme.txt describes what to do with these files. A library of patterns is also available as a Rose model.

References

Bosch, J.: "Design Patterns & Frameworks: On the Issue of Language Support." In Bosch, J.; Hedin, G.; Koskimies, K. (Eds.), Proceedings of LSDF'97: Workshop on Language Support for Design Patterns and Object-Oriented Frameworks, 1997.

Bosch, J.: "Design Patterns as Language Constructs." Journal of Object-Oriented Programming, 11(2): 18-32, May 1998.

Budinsky, F.J.; Finnie, M.A.; Vlissides, J.M.; Yu, P.S.: "Automatic Code Generation from Design Patterns." IBM Systems Journal, 35(2), 1996.

Buennig, S.: "Entwicklung einer Sprache zur Unterstuetzung von Design Patterns und Implementierung eines dazugehoerigen Compilers" (Development of a Language for Supporting Design Patterns, and Implementation of an Accompanying Compiler). Master's thesis, University of Rostock, Department of Computer Science.

Buennig, S.; Forbrig, P.; Laemmel, R.; Seemann, N.: "A Programming Language for Design Patterns." Informatik 99, Reihe Informatik aktuell, Springer, 1999. Presented at Arbeitstagung Programmiersprachen '99.

Forbrig, P.; Laemmel, R.: "Programming with Patterns." TOOLS 2000, Santa Barbara, California. Proceedings, TOOLS 34-USA 2000, IEEE Computer Society.

Gamma, E.; Helm, R.; Johnson, R.; Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.

Jacobsen, E.E.: "Design Patterns as Program Extracts." In Bosch, J.; Hedin, G.; Koskimies, K. (Eds.), Proceedings of LSDF'97: Workshop on Language Support for Design Patterns and Object-Oriented Frameworks, 1997.

Mannhaupt, D.: "Integration of Design Patterns into Object-Oriented Design Using Rational Rose." Master's thesis, University of Rostock, Department of Computer Science.

Palsberg, J.; Schwartzbach, M.I.: "Type Substitution for Object-Oriented Programming." SIGPLAN Notices, 25(10), October 1990. Proceedings of OOPSLA/ECOOP '90 (European Conference on Object-Oriented Programming).

Pulsipher, D.: "Defining and Using Design Patterns in Rational Rose." July.

Seemann, N.: "A Design Pattern Oriented Programming Environment." Master's thesis, University of Rostock, Department of Computer Science, 1999.


Memory Profiling in Java

by Goran Begic
Senior Technical Support Engineer
Rational Software B.V., The Netherlands

For a long time, my understanding of Java was that it is perfectly served by an automatic garbage collection system that removes "all" the obsolete memory left behind during program execution. My private little Java world was a happy place. In fact, I believe this is one of the first things I was told about Java: "You do not have to worry about freeing memory; the garbage collector does it for you." Well, there is always a potential problem if something does everything for you and you do not really know how it does it, or what else it does for you. Don't you agree? Indeed, Java garbage collection is not perfect, and the better we understand how it works, the better we will be able to build larger, more scalable applications.

Java is a programming language that has developed far beyond what it was expected to be in its early days. As we try to build more and more complex Java applications, performance optimization of the code becomes much more important and warrants our close attention. Modern JVM (Java Virtual Machine) and JIT (Just In Time) compilers are built to give more speed to Java code. However, the key to the best code is still in your hands.

There are two aspects of Java code profiling: execution time analysis and memory usage analysis. Why memory analysis, you may ask? The memory management performed by the Java Virtual Machine can cost a significant amount of time, and keeping its good and bad sides in mind can help you increase the overall performance of a Java application. Memory usage analysis is very important in optimizing the performance of

Java applications. The fact that Java programs are executed via the Java Virtual Machine, which does all the memory allocation and de-allocation, does not mean that memory usage has nothing to do with the performance of the application. Excessive memory usage can slow down your code, and as the complexity of your program increases, tuning memory usage can help increase overall performance.

Recently, I was browsing the Usenet for interesting postings about memory leaks in Java, and the resulting list was quite a long one. This does not actually come as a surprise: Java memory problems are real, and they need to be taken seriously. Here are some quotes from the postings that describe the need for memory profiling:

"First, our applications seem to run fine, but then (about half an hour after starting the server) we get an OutOfMemoryException. Does anyone have an idea what might cause this error???"

"I found that the page swap of the Java process which runs my Java program is increased as time goes on. What's wrong??? What do you think about it? I think some memory leak or paging swap is slow..."

Some Background on Java Memory Management

Java does not allow programs to contain pointers to physical memory. When a Java application needs to allocate an object in memory, the JVM returns a reference to the allocated memory area. At the same time, the JVM updates a directed graph of the objects allocated in memory. The objects on the graph are commonly referred to as "nodes" and the references are called "edges." The directed graph holds information about all the objects in memory as they get allocated during the run. The garbage collector uses this information to clean up unused memory.

Objects on the heap can be in one of three states: reachable, resurrectable, and unreachable. Reachable objects are visible to the garbage collector; they are on the directed graph. As long as an object is reachable, it stays in memory and the garbage collector will not attempt to clean it. Resurrectable objects are not visible on the graph of nodes and edges, but they may become reachable after the garbage collector executes the finalize() method on some other objects. Finally, unreachable objects are the first candidates to be garbage-collected.

The garbage collector is the Java Virtual Machine component charged with the task of managing memory allocation on the heap. There are several different types of garbage collectors, but most of them use the method of tracing the graph of objects starting with the root nodes. The basic tracing algorithm is called "mark and sweep." In the mark phase, the garbage collector traverses the tree of references and marks the reachable nodes (objects). In the sweep phase, it frees the unmarked objects.
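To make mark and sweep concrete, here is a schematic sketch of the two phases over the node-and-edge graph just described. This is illustrative only: a real collector works on internal JVM structures, not on application-level objects like these.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Set;

    // A node in the directed object graph described above.
    class Node {
        List edges = new ArrayList();   // outgoing references to other Nodes
    }

    public class MarkSweepSketch {

        // Mark phase: recursively mark every node reachable from a root.
        static void mark(Node root, Set marked) {
            if (marked.add(root)) {     // add() returns false if already marked
                for (Iterator i = root.edges.iterator(); i.hasNext(); ) {
                    mark((Node) i.next(), marked);
                }
            }
        }

        // Sweep phase: everything allocated but not marked is garbage.
        static List sweep(List allObjects, Set marked) {
            List freed = new ArrayList();
            for (Iterator i = allObjects.iterator(); i.hasNext(); ) {
                Node n = (Node) i.next();
                if (!marked.contains(n)) {
                    freed.add(n);       // a real collector would reclaim this memory
                }
            }
            return freed;
        }
    }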

In addition to marking and sweeping, the garbage collector must deal with heap fragmentation, which can obviously be significant. This presents an even more complicated task than checking for objects that are no longer in use. The garbage collector needs to move objects on the fly in order to reduce heap fragmentation. During this process, the live objects are pushed across the free memory area toward one end of the heap.

A significant detail is that even modern Java Virtual Machines support only single-threaded garbage collectors. This means that even if you run your Java program on a multiprocessor machine, all other activities must wait while the garbage collector frees the unreachable objects and compacts the heap. The garbage collector's task of optimizing memory usage becomes more difficult as more "dead" objects remain referenced during the run of the application. In an application where large numbers of objects are created, the influence of the Java Virtual Machine on the performance of the application can be significant. Therefore, it is important to reduce the number of objects in use in order to increase the performance of both the Java Virtual Machine and your application.

Here are some potential memory-related pitfalls in Java:

- Adding objects to collections or arrays and forgetting about them.
- Resetting the reference to an object only on its next use. If the routine in which the reference is reset is not called, the object stays in memory and will not be garbage-collected.
- Changing the state of an object while there is still a reference to the old state.
- Having a reference that is pinned by a long-running thread. Even if you set the object reference to null, it will not be garbage-collected until the thread terminates.
- Using system resources that are not freed up. For example, it is a little-known fact that the Abstract Windowing Toolkit (AWT) for Sun Java will not be cleaned up by the garbage collector alone; it requires a call to the method dispose() in order to free the associated system resources.

What Steps Can You Take to Reduce Memory Overhead?

The first step in dealing with memory overhead is to determine how much memory your application consumes. A memory profile of your Java application will help you determine the memory hotspots in the program. After localizing the hotspots and memory bottlenecks, you can decide upon further steps for optimizing memory consumption throughout the run. Monitoring garbage collection during the run can give you additional information about how the Java Virtual Machine manages memory for your application. As with execution time analysis, you have a choice between writing your own test harnesses and choosing a commercial tool for the job.
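Before turning to measurement, the first pitfall above -- adding objects to a collection and forgetting about them -- deserves a minimal sketch. This is hypothetical code, unrelated to the LeakSample applet discussed below; the collection keeps every listener reachable, so none of them can ever be collected:

    import java.util.ArrayList;
    import java.util.List;

    public class EventBroadcaster {
        // Grows forever if listeners are added but never removed: the
        // list keeps each one reachable for the life of this object --
        // a classic Java "memory leak."
        private List listeners = new ArrayList();

        public void addListener(Object listener) {
            listeners.add(listener);
        }

        // The fix is an explicit removal path that callers actually use.
        public void removeListener(Object listener) {
            listeners.remove(listener);
        }
    }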

Java provides the methods freeMemory() and totalMemory() on java.lang.Runtime, which you can use to test memory usage. freeMemory() returns the amount of unused memory in the JVM, and totalMemory() returns the total amount of memory available to it. Combining the two gives you a fairly good picture of memory usage in the part of the code under test. In actual use it can look like this:

// create the variables (assumes java.util.Vector has been imported)
Object obj = null;
Runtime runtime = Runtime.getRuntime();
long occupiedMemory = 0;

// request garbage collection so dead objects do not distort the measurement
System.gc();
occupiedMemory = runtime.totalMemory() - runtime.freeMemory();

// allocate the object under test
obj = new Vector();

// memory consumed by the allocation
occupiedMemory = (runtime.totalMemory() - runtime.freeMemory()) - occupiedMemory;

// print results
System.out.println("Memory used: " + occupiedMemory + " bytes");

Please note that it is necessary to run System.gc() to engage garbage collection before measuring the total and free memory. On a Java 2 Virtual Machine, System.gc() and Runtime.getRuntime().gc() do not fully guarantee that the garbage collector will free memory immediately after gc() returns, but in the vast majority of cases it will.

An example of this method can be found in the appendix. It is a small applet that leaks memory; calls to totalMemory() and freeMemory() are used to display memory usage from within the applet. This method has one big disadvantage, however: you first need to know the memory hotspot on which to concentrate. It is therefore not a good solution for complex Java applications. In more complex situations, the solution will most probably be a commercial tool that creates a memory profile of your application and marks the hotspots that need to be examined in detail. Rational Purify for Java is a memory-profiling tool that does just that.

The Java applet in the appendix of this article uses the totalMemory() and freeMemory() technique explained above to show the amount of memory in use. The test example, called "LeakSample," has a built-in error that causes a memory leak. You can watch how memory usage increases steadily even as some memory is garbage-collected. In this simple example, you can easily see that the memory footprint of the application grows continuously. In a real-life situation, it would be much more difficult to detect such an error without a specialized tool. The worst-case scenario involves applications that run for a long period of time, where even a small -- and therefore difficult to detect -- error could lead to a crash.

If you want to test the LeakSample without Purify for Java, you will need to add calls to System.gc() and check the amount of total and free memory shortly after you force the garbage collector to run. Figure 1 shows a screenshot of LeakSample.class running in Purify.

Figure 1: Memory in Use Graph

Red dots on the Memory in Use graph mark garbage collections. Memory usage increases steadily during the run, but we will concentrate on the leak itself later in the text. At this point I would like to show how you can use Purify for Java to analyze the memory usage of your Java application during the run. Let's take a look at some details in the Purify report:

Figure 2: Call Graph

Figure 2 is the memory Call Graph. It displays the call chains of methods during the run. Note that not all methods are shown, only those with significant memory usage; other methods can be added to the report by expanding the view. The highlighted chain of calls points to the function that uses the most memory during the run. This function is the first candidate for detailed analysis -- a hotspot of the application. The Function Detail view (Figure 3) shows that the memory is allocated in this method itself; it is not one of its descendants that caused the extensive memory usage:

Figure 3: Function Detail Graph

Now that you have located the memory hotspot of your application, you can go back to the source of the method and optimize the memory usage of the Java program. This simple test case did not illustrate any of the more sophisticated errors explained earlier, but this is the method for identifying hotspots. Following are some general rules and suggestions that can help you influence memory consumption (although you must decide which solution path to follow):

1. Unfortunately, you cannot de-allocate memory directly from your code as you can in C++. In fact, you could not even allocate it directly; the Java Virtual Machine did that for you. Java does provide the finalize() method, which you may declare for your classes. Note that using finalize() is not the same as calling a destructor in C++: finalize() is a "finalizer" for a certain object, but running it does not necessarily mean that the memory occupied by the object is reclaimed. It is the garbage collector that runs finalizers on objects, and it is the garbage collector's logic that decides when to actually free the object.

2. If you need to keep large pools of objects in memory, you may reach the limits of available memory on your system; in this case, the WeakReference class from the java.lang.ref package in Java 2 offers a solution for temporary objects. The garbage collector automatically clears weak references once an object is reachable only through them. Consequently, these objects may need to be re-created later in the program run instead of being reused, but this lowers the possibility of a crash due to running out of memory.

3. Java 2 also introduces SoftReferences. SoftReferences are cleared later than WeakReferences: the JVM tries to keep softly referenced objects alive until application memory usage gets high. SoftReferences are designed for caches that need to be freed automatically under memory pressure.

4. Adjusting the heap parameters for the run can also help optimize garbage collection in your code. By increasing the initial heap size (option -ms) and maximum heap size (option -mx) beyond the default values, the start of the garbage collector may be delayed. On the other hand, the garbage collector may need more time to free objects and compact the heap because of the larger heap size. You can use the Memory-in-Use graph in Purify for Java to monitor the garbage collection intervals; the trick is to force garbage collection at intervals favorable to overall application performance.

5. One of the best suggestions I can offer is to null out all references to objects that are no longer needed. If you do this correctly, the garbage collector will do its part and clear the unused objects from memory.

Memory Leaks

Every C++ programmer knows the term "memory leak," and it is one area of memory analysis that requires special attention. C++ has no automated system like Java's garbage collector to mark and free memory that is no longer in use. Every time you allocate memory on the heap using malloc or new, you must also make sure to free it once it is no longer needed; otherwise, the objects you have allocated continue to occupy precious memory space until the application terminates. If you allocate memory in a loop and let the application run for a long period of time, the application can potentially "eat" all the available memory and eventually crash.

There are several ways of checking your C++ code for leaks. In Microsoft Visual C++, for example, you can use the debugger and the CRT debug heap functions, or you can use a specialized tool such as Rational Purify, which by default checks for memory leaks at application exit and gives you an error report, as shown in Figure 4.

Figure 4: Memory Leak in C++

In C++, memory errors can, under certain circumstances, crash an application when a reference (edge) points to a memory area where the object no longer exists (a dangling pointer). It is very difficult to detect such an error without a specialized tool capable of run-time error checking. Rational Purify reports an FMR (Free Memory Read) or FMW (Free Memory Write) error for any piece of code that tries to access memory that has already been de-allocated. In the following example, the program tries to access memory that was deleted earlier in the run:

#include <iostream>
using std::cerr;

int main() {
    int *ptr = new int[2];
    ptr[0] = 0;
    ptr[1] = 1;
    delete[] ptr; // Bug: put here accidentally instead of after the for() loop below
    for (int i = 0; i < 2; i++) {
        // FMR: these reads access memory that was already de-allocated
        cerr << "element #" << i << " is " << ptr[i] << '\n';
    }
    return 0;
}

Figure 5: Dangling Pointer in C++

Purify reports this error as an FMR (Free Memory Read) and also points you to the location where the missing memory was allocated. The C++ compiler cannot detect such an error, and without help from a run-time error-checking tool like Purify there is always a possibility that a show-stopper like an FMR gets shipped to the customer.

What Does a Memory Leak Look Like in Java?

A memory leak in Java is best described, in C++ terms, as "memory in use": an object that still has references to it. If such an object is no longer needed, it is a memory leak. "Dangling pointers" are not the obvious show-stopper in a Java program that they are in C++. In Java, you cannot remove an object directly from your code; you can only remove the references to it. A Java object with such a "hanging" edge will be skipped by the garbage collector and continue to occupy memory space, thus "only" creating a memory leak. However, a leak like this can also crash a Java application if the unused object keeps references to other objects on the heap, preventing them from being freed by the garbage collector. In extreme cases, such an application could use all the available memory and crash.
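Because the only remedy is to remove the reference itself, fixing a leak of this kind usually means cutting the offending edge explicitly. A minimal sketch follows; the names cache, staleEntry, and listener are hypothetical:

// cut the edges that keep a no-longer-needed object reachable
cache.removeElement(staleEntry); // drop it from a long-lived collection
listener = null;                 // null out a field pinned by a long-lived object
// after the next collection cycle, the garbage collector can reclaim the object
// and everything that was reachable only through it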

How Can You Detect Memory Leaks in Your Java Code Using Rational Purify for Java?

Let's make use of the test application in the appendix, "LeakSample," again. It has a built-in memory leak that Rational Purify for Java can detect with ease. Please note that Purify for Java does not support the Sun HotSpot Java Virtual Machine; you will need to use the "-classic" option with the Java executable.

To find a memory leak, we will monitor the amount of free memory left after garbage collection does its work. When running the application in Rational Purify for Java, you can use the Snapshot feature to capture the memory available on the heap at two stages of the run. To activate the memory leak in LeakSample, check the "Leak continuously" box and click "Start," keeping an eye on the heap in the Memory in Use graph. Stop the execution, force garbage collection from the Purify GUI, and take a snapshot. Continue running the application and repeat the procedure later to take a second snapshot. Then use the "Compare runs" tool to compare the data from the two snapshots. The resulting Call Graph of the run will lead you directly to the memory leak:

Figure 6: Memory Leak in Java

A look at the function list of the compared runs confirms the obvious leak:

Figure 7: Function List

The "Compare runs" tool displays only the memory used between the two snapshots. There is no doubt that the method LeakSample$Process.run() continuously causes memory to be allocated but not freed. This type of error is common, and it becomes extremely difficult to detect as the size of the application grows.
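If you ever need to confirm a suspected leak of this kind without a profiler, a rough programmatic analogue of the two-snapshot comparison is to record the used heap at two checkpoints after requesting a collection. A minimal sketch, with the same caveat as before that System.gc() is only a request:

Runtime rt = Runtime.getRuntime();

System.gc();
long snapshot1 = rt.totalMemory() - rt.freeMemory();

// ... exercise the suspect code path for a while ...

System.gc();
long snapshot2 = rt.totalMemory() - rt.freeMemory();

// a difference that keeps growing each time this block runs suggests a leak
System.out.println("Heap growth between checkpoints: " + (snapshot2 - snapshot1) + " bytes");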

Conclusion

Even with the help of modern techniques, your Java code may perform more slowly than desired. This is where you can take charge and make your Java code faster. Standard execution time analysis is only one way to profile your Java code: profiling tools like Rational Quantify can give you detailed timings for each of your methods, and even for individual lines of code. However, because of Java's internal memory management, it is important to profile the memory usage of the Java application as well.

References

Rational Developer Tool documentation.
Steve Wilson and Jeff Kesselman, Java Platform Performance: Strategies and Tactics. Sun.
Peter van der Linden, Just Java. SunSoft.
Jack Shirazi, Java Performance Tuning. O'Reilly.
Craig Larman and Rhett Guthrie, Java 2 Performance and Idiom Guide. Prentice Hall.

Appendix A: LeakSample.java

Copyright Rational Software 2001

Managing Teams

by Joe Marasco
Senior Vice President
Rational Software

This is the first installment of a two-part series that distills a good amount of hard-earned experience in leading and managing groups into a few basic instructions for success. The blend of leadership and management strategies I describe is effective for both product and service-related efforts. If you've ever been in a leadership position, you may find that I am articulating much of what you've already discovered through experience -- and by applying common sense. Here I present four ideas; six more will follow in the next issue of The Rational Edge.

1. Focus on building a strong team that can solve hard problems and add genuine value for the customer.

The key words here are focus, team, hard problems, and the customer. You need to have a focus; otherwise, your energy will not be well directed. And, as it is your team that will ultimately produce the results you need, your main focus should be on building and supporting that team. The best definition of team I've found is that of Katzenbach and Smith:1

A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they hold themselves mutually accountable.

Your first challenge is to find the right combination of people with the right combination of skills and personal qualities. Then, to maintain a sharp edge, the team you assemble needs a performance challenge -- to tackle
