INNOVATIVE ENGINEERING TECHNIQUE OF OBJECT ORIENTED SOFTWARE DEVELOPMENT

Divya Prakash Shrivastava* & R. C. Jain**

* Asstt. Prof., Department of Computer Science, El Jabal Al Garbi University, Zawia, Libya. email: dpshrivastava@yahoo.com
** Director, SATI, Vidisha

Received: 26th Aug. Revised: 15th Nov. Accepted: Dec.

Abstract

Computer software is an engine of socio-economic growth and demands new engineering techniques and strategies. The demand for quality in software applications has grown: as software becomes more integrated into our lives, the effects of software failures become more acute. Testing is an essential part of software development and provides an indicator of the quality of the software. Testing remains the primary way to improve the reliability of software. Billions of dollars are spent on testing in the software industry, as testing usually accounts for more than half the cost of software development. The demand for new engineering technologies for quality in software applications has grown, and awareness of testing-related issues plays an important role at all testing levels. The unit level of testing has undergone the most recent and most dramatic change. With the introduction of new agile (aka "lightweight") development methods such as XP (eXtreme Programming) came the idea of Test-Driven Development (TDD). TDD is a software development engineering technique that melds program design, implementation and testing in a series of micro-iterations that focus on simplicity and feedback. Programmer tests are created using a unit testing framework and are fully automated. This paper contributes to the development of a TDD framework for unit testing, which includes new engineering concepts and methodology issues in the form of TDD techniques using the V design model, especially for engineering education, industrial training and software development.

Key Words: Test Case, Unit Testing, Metrics, eXtreme Programming, TDD, ATCUT

INTRODUCTION

The world is passing through a major revolution, the information revolution, in which information and knowledge are available to people in unparalleled amounts. The information revolution is based on two major technologies: computing and communication.

LIPMAN LAYS DOWN THE LAW

"The rate of change of society is a function of the age at which you gain access to the dominant technology of the time." The dominant technology of today is computing; a society driven by cars changes more slowly than one based on computers. We can say that Lipman's law holds, because the life we live today is changing rapidly along with its challenges. The information revolution in the 21st century, as always, will depend on and relate to the life style of that period. Computer education in turn will have to prepare students for the study, analysis, research and development of applications in these directions: typically wireless systems, large databases, faster CPUs, miniaturized hardware, large storage per unit of space, robotics, knowledge management systems and software development techniques. Software pervades every aspect of life: business, financial services, medical services, communication systems, entertainment and education are invariably dependent on software. With this increasing dependency on software, we expect software to be reliable, robust, safe and secure. Unfortunately, at the present time the reliability of day-to-day software is questionable.
Testing is an essential part of software development, providing an indicator of the quality of the software [1]. "Testing proves the presence, not the absence, of bugs." - E. W. Dijkstra

Adequate testing of software can prevent such failures from occurring. Adequate testing, however, can be difficult if the software is extremely large and complex, because the amount of time and effort required to execute a large set of test cases or regression test cases can be significant [3]. Therefore, the more accurately test cases are designed, the more testing can be done, which in turn supports a corresponding rise in program transformation. The unit test provides the lowest level of testing during software development, where the individual units of software are tested in isolation from other parts of the program or software system. Automated testing uses another program that runs the program being tested, feeds it the proper input and checks the output against the expected output. Once the test case is written, no human intervention is needed: the test case does the work and reports the result [2].

Amongst the different types of program transformation, behaviour-preserving source-to-source transformations are known as refactorings [4]. Refactoring is the process of changing a software system in such a way that it does not alter the external behaviour of the code yet improves its internal structure [5]. The refactoring of test cases may bring additional benefits to software quality and productivity, namely cheaper detection of design flaws and easier exploration of alternative design decisions; the terms code refactoring and test case refactoring can thus be distinguished.
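To make the notion of a behaviour-preserving transformation concrete, the following is a minimal Java sketch of an "extract method" refactoring; the Invoice class and its methods are hypothetical illustrations, not code from the paper or from the case-study projects. The external behaviour of computeTotal() is unchanged, while the extracted computeTax() becomes directly unit-testable.

// Hypothetical example of a behaviour-preserving "extract method" refactoring.
// Before: computeTotal() mixed price summation and tax calculation in one body.
// After: the tax rule lives in its own method, so it can be unit-tested in isolation,
// while callers of computeTotal() observe exactly the same results as before.
public class Invoice {
    private final double[] lineItems;
    private final double taxRate;

    public Invoice(double[] lineItems, double taxRate) {
        this.lineItems = lineItems;
        this.taxRate = taxRate;
    }

    public double computeTotal() {
        double subtotal = 0.0;
        for (double item : lineItems) {
            subtotal += item;
        }
        return subtotal + computeTax(subtotal);   // behaviour identical to the pre-refactoring version
    }

    // Extracted method: the refactoring step that exposes the tax rule to direct testing.
    double computeTax(double subtotal) {
        return subtotal * taxRate;
    }
}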

These benefits are among the main reasons for the wide acceptance of refactoring as a design improvement technique and for its subsequent adoption by agile software methodologies, in particular eXtreme Programming (XP) [6]. XP encourages development teams to skip comprehensive initial architecture or design stages, to guide implementation activities according to user requirements, and thus to perform successive code refactorings when inconsistencies are detected.

TEST-DRIVEN DEVELOPMENT

Test-Driven Development (TDD) is the core of the agile code development approach derived from eXtreme Programming (XP) and the principles of the Agile manifesto. It guarantees testability so as to reach extremely high test coverage, enhances developer confidence, leads to highly cohesive and loosely coupled systems, and allows larger teams of programmers to work on the same code base, since the code can be checked in more often. It also encourages explicitness about the scope of the implementation, and it helps to separate logical from physical design and thus to simplify the design, because only the needed code is written.

TDD is not a testing technique but a development and design technique in which tests are written prior to the production code. Tests are added gradually during implementation, and when a test passes, the code is refactored to improve its internal structure. The incremental cycle is repeated until all functionality is implemented. The TDD cycle consists of six fundamental steps:

1. Write a test for a piece of functionality.
2. Run all tests to see the new test fail.
3. Write the corresponding code that passes the tests.
4. Run the tests to see them all pass.
5. Refactor the code.
6. Run all tests to verify that the refactoring did not change the external behaviour.

The first step involves simply writing a piece of test code for the desired functionality. The second step is required to validate the correctness of the test: the test must not pass at this point, because the behaviour under implementation should not exist yet. If the test does pass, either the test is not testing the correct behaviour or the TDD principles have not been strictly followed. The third step is writing the code, keeping in mind to write only as little code as possible to make the test pass. The next step is to check that the change has not introduced problems elsewhere in the system. Once all tests pass, the internal structure of the code is improved by refactoring. The cycle is presented in Fig. 1.

Fig. 1. TDD Cycle.
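As a hedged illustration of steps 1-4 of the cycle, the following minimal JUnit sketch assumes a hypothetical BoundedStack class (not taken from the case study): the tests are written first, fail until the production code exists, and then pass with the smallest implementation that satisfies them.

import org.junit.Test;
import static org.junit.Assert.*;

// Step 1: write a test for a small piece of functionality before the code exists.
// Running it first (step 2) fails, because BoundedStack has not been implemented yet.
public class BoundedStackTest {

    @Test
    public void newStackIsEmpty() {
        BoundedStack stack = new BoundedStack(10);
        assertTrue(stack.isEmpty());
    }

    @Test
    public void pushMakesStackNonEmpty() {
        BoundedStack stack = new BoundedStack(10);
        stack.push(42);
        assertFalse(stack.isEmpty());
    }
}

// Step 3: write only as much production code as is needed to make the tests pass (step 4).
class BoundedStack {
    private final int[] elements;
    private int size = 0;

    BoundedStack(int capacity) {
        this.elements = new int[capacity];
    }

    boolean isEmpty() {
        return size == 0;
    }

    void push(int value) {
        elements[size++] = value;
    }
}

Step 5 would then refactor BoundedStack (for example, extracting a capacity check) while the same tests are re-run to confirm that the external behaviour is unchanged.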

DESIGN OF MODEL

Implementation of the design is a straightforward matter of translating the design into code, since the most difficult decisions are made during design. The code should be a simple translation of the design decisions into the peculiarities of a particular language. Decisions do have to be made while writing code, but each one should affect only a small part of the program so that it can be changed easily. In Test-Driven Development, analysis, design, implementation and testing are highly interleaved activities performed in an incremental fashion. Here, I show figures of a unit and of an aggregation of units, which I will call a subsystem [7].

Fig. 2. A Unit and an Aggregation of Units.

The V-model is one of the most widely used models in many projects. Though it is basically a verification-validation cycle, similar methods are used for the testing activity too [8].

Fig. 3. V Model (SDLC phases: Requirement Analysis, High Level Design, Low Level Design, Coding; STLC phases: Unit Testing, Integration Testing, System Integration Testing, Product Acceptance Testing).

As we have seen, the V model says that someone should first test each unit. When all of a subsystem's units are tested, they should be aggregated and the subsystem tested to see whether it works as a whole. So how do we test a unit? We look at its interface as specified in the detailed design, or at the code, or at both, pick inputs that satisfy some test design criteria, feed those inputs to the interface, and then check the results for correctness. Because the unit usually cannot be executed in isolation, we have to surround it with stubs and drivers, as in Fig. 4. The arrow represents the execution trace of a test. A test case designed to find bugs in a particular unit is best run with the unit in isolation, surrounded by unit-specific stubs and drivers.

Fig. 4. Testing of a particular unit (interface, driver and stub around the unit).

Fig. 5. Test case (test driver and test stubs around the unit to be tested).
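The arrangement of Fig. 4 and Fig. 5 can be sketched in JUnit as follows; the OrderProcessor unit, the PaymentGateway interface and the stub are hypothetical examples used only to show how a stub isolates the unit while the test acts as the driver.

import org.junit.Test;
import static org.junit.Assert.*;

// The collaborator interface that the unit under test depends on.
interface PaymentGateway {
    boolean charge(double amount);
}

// Test stub: stands in for the real payment service so the unit can run in isolation.
class AlwaysApprovingGatewayStub implements PaymentGateway {
    @Override
    public boolean charge(double amount) {
        return true;   // canned response, no network call
    }
}

// The unit to be tested.
class OrderProcessor {
    private final PaymentGateway gateway;

    OrderProcessor(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    String process(double amount) {
        return gateway.charge(amount) ? "CONFIRMED" : "REJECTED";
    }
}

// Driver: the test feeds input to the unit's interface and checks the output.
public class OrderProcessorTest {
    @Test
    public void confirmsOrderWhenPaymentSucceeds() {
        OrderProcessor processor = new OrderProcessor(new AlwaysApprovingGatewayStub());
        assertEquals("CONFIRMED", processor.process(99.0));
    }
}

Because the unit depends only on the PaymentGateway abstraction rather than on a concrete payment class, the same arrangement also illustrates the Dependency Inversion Principle discussed below.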

CASE STUDY AND DISCUSSION

This section describes a case study conducted in a large software corporation. Two projects were developed. The first project employed a test-at-last approach, including automated unit tests written in JUnit; it was developed in a traditional mode with a large up-front design and with automated and manual testing after the software was implemented. This project is labelled test at last (TAL). The second project implements the same functionality but was developed using a test-first (TDD) approach with automated unit tests. This project is labelled test at first (TAF). This section reports, compares, and discusses the internal design quality metric results. The metrics are at the method, class, and interface level. The design quality was assessed through six Automated Test Case of Unit Testing (ATCUT) metrics, and the data were gathered from the case projects in order to compare the test-at-first (Test-Driven Development) approach with the traditional test-at-last approach.

In software development, test case design is the earliest stage at which the structure of the system is defined; therefore, the term "design" is generally used when referring to architectural design. Traditional product metrics such as size, complexity and performance are not sufficient for characterizing and assessing the quality of test cases for OO software systems; they should be supplemented with notions such as encapsulation and inheritance, which are inherent in object orientation. The OO design metrics for measuring the test case design quality effects of TDD within this study were selected from the proposed ATCUT metrics: Weighted Methods per Class (WMC), Depth of Inheritance Tree (DIT), Number Of Children (NOC), Response For a Class (RFC), and the Dependency Inversion Principle (DIP), which measures the ratio of dependencies that are on abstract classes.

Weighted Methods per Class (WMC)
The WMC value represents a class's complexity, and it can be used for predicting how much effort the class requires when it is maintained. A high WMC value implies that the class is probably less reusable and requires more maintenance effort, because it is likely to have a greater impact on its derived classes and it may be more application specific.

Depth of Inheritance Tree (DIT)
The DIT value measures the depth of each class within its hierarchy, and it can be used for predicting the complexity and the potential reuse of a class. A high DIT value indicates increased complexity, because more methods are involved, but also increased potential for reuse of inherited methods.

Number Of Children (NOC)
The NOC value is the number of a class's immediate subclasses, and it may be used for assessing the class's potential reuse and the effort the class requires when it is tested. A high NOC value may be a sign of improper use of subclassing, and it implies that the class may require more testing, because it has many children using its methods. On the other hand, a high NOC value indicates greater potential for reuse, because inheritance is one form of reuse.

Response For a Class (RFC)
The RFC value is the number of methods that can be invoked in response to a message to an object of the class or by some method in the class. Testing and debugging a class with a high RFC value is difficult, because the increased amount of communication between classes requires more understanding on the part of the tester. It also indicates increased complexity.

Dependency Inversion Principle (DIP)
The Dependency Inversion Principle (DIP) states that high-level classes should not depend on low-level classes, i.e. abstractions should not depend upon details.

ANALYSIS OF ATCUT METRICS DATA

A. Weighted Methods per Class (WMC)

Fig. 6. Graphical representation of WMC data.

The WMC metric reflects the cyclomatic complexity of the methods of a class, since WMC is computed as the sum of the complexities of all its methods. In our analysis of Project 1, 13 classes have a low WMC value; only 2 classes have a WMC greater than 25. Since a class consists of at least one method, the lower limit of WMC is 1. In Project 2, most classes have a low WMC and only 1 class has a WMC of 31. This result indicates that most of the classes show more polymorphism and less complexity: a low WMC indicates greater polymorphism in a class, while a high WMC indicates more complexity. The WMC figures look quite similar for both projects, but the test-at-first (TDD) approach performs somewhat better than the test-at-last approach.
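To make the counting behind the WMC figures (and the RFC figures discussed next) concrete, the small hypothetical class below, which is illustrative only and not taken from the case-study projects, is annotated with the way WMC and RFC would be tallied, assuming, as in the analysis above, that each method's weight is its cyclomatic complexity.

// Hypothetical class used only to illustrate how WMC and RFC are counted.
class Account {
    private double balance;

    // Cyclomatic complexity 2 (one if-branch).
    void deposit(double amount) {
        if (amount > 0) {
            balance += amount;
        }
    }

    // Cyclomatic complexity 3 (two conditions).
    boolean withdraw(double amount) {
        if (amount <= 0) {
            return false;
        }
        if (amount > balance) {
            return false;
        }
        balance -= amount;
        return true;
    }

    // Cyclomatic complexity 1.
    double getBalance() {
        return balance;
    }
}
// WMC = 2 + 3 + 1 = 6 (sum of the methods' cyclomatic complexities).
// RFC = 3 own methods + 0 distinct external methods invoked = 3; calling, say,
// a collaborator's log() method inside withdraw() would raise RFC to 4.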

B. Response For a Class (RFC)

Fig. 7. Graphical representation of RFC data.

The RFC metric is measured as the set of all methods and constructors that can be invoked as a result of a message sent to an object of the class. The Eclipse Metrics tool suggests an upper threshold for RFC; a class with a large RFC is more complex and harder to maintain. In our analysis of Project 1, 13 classes have an RFC within the threshold, while the other 7 classes exceed it. Project 2 has 25 classes, most of which have an RFC around the threshold; only two classes exceed it. This result indicates that only 2 classes would have to be modified to reduce complexity, and that most of the classes in the test-at-first approach have low complexity.

C. Depth of Inheritance Tree (DIT)

Fig. 8. Graphical representation of DIT data.

The DIT metric is the length of the maximum path from a node to the root of the inheritance tree; a root class has the minimum DIT value. Since deeper trees constitute greater design complexity, as more methods and classes are involved, a maximum DIT value of 5 is recommended. A DIT value of 2 or 3 indicates a higher degree of reuse. If a majority of DIT values are below 2, it may represent poor exploitation of the advantages of OO design and inheritance. In our analysis of Project 1, 50% of the classes have a DIT value of 2 and 50% have a DIT value of 1. Project 2 has 25 classes: 15 of them have a DIT value of 2 to 3, 4 classes have a DIT value of 1, and the rest have a DIT value of 4. This result indicates that the classes of Project 2 show a higher degree of reuse and less complexity.

D. Number of Children (NOC)

The NOC metric measures the number of direct subclasses of a class. Since a class with more children carries more responsibility, it is harder to modify and requires more testing; a lower NOC is therefore better, and a high NOC may indicate a misuse of subclassing. In our analysis, both Project 1 and Project 2 need little testing effort in this respect.

E. Dependency Inversion Principle (DIP)

Fig. 9. Graphical representation of DIP data.

We now discuss the measurement of this object-oriented principle. The DIP is mainly used to avoid developing software that shows bad symptoms. The DIP metric measures the ratio of dependencies that are on abstract classes. In our analysis, most of the classes from Project 1 show a DIP between 0.5 and 0.67, whereas most of the classes from Project 2 show a higher DIP value. This result shows that the classes of Project 2 depend more on abstract classes than the classes of Project 1.

CONCLUSION

There are many benefits to developing unit tests prior to implementing a software component. The first, and most obvious, is that unit testing forces development of the software to be pursued in a way that meets each requirement. The software is considered complete when it provides the functionality required to successfully execute the unit test, and not before; the requirement is strictly enforced and checked by the unit test.

A second benefit is that it focuses the developer's efforts on satisfying the exact problem, rather than developing a larger solution that also happens to satisfy the requirement. This generally results in less code and a more straightforward implementation. A third, more subtle benefit is that the unit test provides a useful reference for determining what the developer intended to accomplish (versus what the requirements state). If there is any question as to the developer's interpretation of the requirements, it will be reflected in the unit test code.

This research has demonstrated that test at first (TAF) can, and is likely to, improve some test case quality aspects at minimal cost over a comparable test at last (TAL) approach. In particular, it has shown differences in the areas of code complexity, size, and other metrics that are closely related to test case design. These internal quality differences can substantially improve external software quality (defects), software maintainability, software understandability, and software reusability. As a result, it is believed that this research can have a significant impact on the state of software construction. Some software development organizations will be convinced to adopt TDD in appropriate situations. As developers and designers learn to take a more disciplined approach to software development, they will carry this approach into professional software organizations and improve the overall state of software construction.

REFERENCES

1. John A. Fodeh and Niels B. Svendsen, "Release Metrics: When to Stop Testing with a Clear Conscience", Journal of Software Testing Professionals, March.
2. Eugene Volokh, VESOFT, "Automated Testing: When and How", Interact Magazine.
3. S. Elbaum, A. G. Malishevsky and G. Rothermel, "Prioritizing Test Cases for Regression Testing", ACM SIGSOFT International Symposium on Software Testing and Analysis, Portland, Oregon, United States, ACM Press.
4. Don Roberts, "Practical Analysis for Refactoring", PhD thesis, University of Illinois at Urbana-Champaign, 1999.
5. B.-K. Kang and J. M. Bieman, "A Quantitative Framework for Software Restructuring", Journal of Software Maintenance, 11, 1999, pp. 245-284.
6. M. B. Cohen, P. B. Gibbons, W. B. Mugridge and C. J. Colbourn, "Constructing Test Suites for Interaction Testing", 25th International Conference on Software Engineering (ICSE'03), Portland, Oregon, United States, IEEE Computer Society, 2003, pp. 38-49.
7. Linda H. Rosenberg, "Applying and Interpreting Object Oriented Metrics", Software Assurance Technology Office (SATO).
8. Barry W. Boehm, "A Spiral Model of Software Development and Enhancement", IEEE Computer, May 1988.