Analysis of Test Driven Development by Example

Computer Science and Applications 1 (2013) 5-13

Aleksandar Bulajic and Radoslav Stojic
The Faculty of Information Technology, Metropolitan University, Belgrade, 11000, Serbia

Received: June 18, 2013 / Accepted: July 9, 2013 / Published: December 15, 2013.

Abstract: Traditional testing methods are based on testing already existing code and functionality; the TDD (Test Driven Development) approach writes a test first and then the implementation code. This paper describes the TDD methodology by developing an example step by step and commenting on each step. TDD claims that the quality of delivered software improves even though the total development time is shorter, i.e., that cutting development expenses and shortening development time does not affect the quality of the final software product. Research and example project results differ significantly, which suggests that the skills and experience of the IT (Information Technology) professionals involved can be a crucial factor, but there is no doubt that the TDD approach delivers better test coverage of the implementation code and introduces fewer software defects. However, this approach also uses approximately 15% more time for development.

Key words: Test Driven Development, TDD, unit test, automated test, NUnit.

1. Introduction

The test-first concept was introduced in Extreme Programming, an agile development methodology created by Kent Beck. The Extreme Programming development method originates from the experience Kent Beck collected while working as project manager on the Chrysler C3 project in 1996. In 1999 he wrote the book Extreme Programming Explained, in which he presented this methodology, which combines already known best-practice principles.
Kent Beck is also known as the designer of the automated unit testing framework design known as xUnit, today even better known through JUnit, the automated unit test framework used by Java programmers, and NUnit, the automated unit test framework used by Microsoft .NET programmers (C# programmers, for example). In 2003 Addison-Wesley published another Kent Beck book, Test Driven Development by Example, which attracted more attention to the test-first concept.

(Radoslav Stojic, associate professor, research fields: software engineering, computer games, flight simulators. Corresponding author: Aleksandar Bulajic, Ph.D. candidate, research field: information technology. E-mail: aleksandar.bulajic.1145@fit.edu.rs.)

The basic idea behind Test Driven Development is developing a test before implementing a requirement. The section Test Driven Development describes this approach in detail. The next section, Test Driven Development Example, includes a fully functional example developed in C#. The section Conclusions is a short discussion and conclusion. All presented examples were developed with Visual Studio 2010 and C# and executed using the NUnit automated test framework.

2. Test Driven Development

The TDD (Test Driven Development) rules defined by Kent Beck [1] are very simple:
(1) Never write a single line of code unless you have a failing automated test;
(2) Eliminate duplication.

The first principle is fundamental to the TDD approach because it introduces a technique where a developer first writes a test and then the implementation code. Another important consequence of this rule is that

test development drives the implementation. Implemented requirements are testable by default; otherwise, it would not be possible to develop a test case.

The second principle is today called Refactoring, i.e., improving the design of existing code. Refactoring also means implementing a modular design, encapsulation, and loose coupling, the most important principles of Object-Oriented Design, through continuous code reorganization and without changing existing functionality.

2.1 Test Driven Development Work-Flow

Fig. 1 illustrates the Test Driven Development process.

Fig. 1 Test Driven Development workflow diagram.

2.1.1 Requirement

Requirements in Extreme Programming and Agile development are managed differently than in the traditional approach. The traditional approach can use a lot of time and a huge communication overhead to specify all requirement details, relations, and interactions. Extreme Programming splits development into short iterations that can last from one to three or four weeks. Each iteration delivers fully functional software that adds new functionality or improves existing functionality. A short iteration requires that requirement complexity be reduced by dividing requirements into parts that can be completed in the short development time. A short iteration also requires that development start very early; there is no time to wait until the whole requirement specification is completed. Many Agile and Extreme Programming sites offer idealized pictures of requirement management based on user stories, but the time necessary to understand a requirement cannot be avoided. This means that even though it is possible to start development from a short user story, the communication overhead related to understanding the requirement details cannot be avoided. The most used Agile and Extreme Programming argument for why it is not necessary to wait for a full requirement specification is that requirements are subject to change.
Development is the best way to validate requirement understanding. Supporters of Agile development often mention that requirement changes are welcome. Even though requirements are often changed, and changes cannot be avoided even in short development projects, I strongly doubt that changes are welcome. The design of the code is closely connected to the requirements, and each requirement change can require more or less work to adapt the existing solution. Always keep in mind that every methodology's interest is to be used and to implement improvements based on collective experience and best practice. This means that especially those methodologies claiming the attribute "new" will claim that their approach solves many issues of the old methodologies. The authors' own experience is that new methodologies very often do improve on the usage of an old methodology, but they also introduce new issues and new complexities. However, both the traditional methodology approach and the TDD methodology approach need to know about requirements in advance. The differences are in the time used for specifying requirements and also in the requirement size and complexity. TDD requires less time and starts implementation as soon as some parts of

the requirements are known. Requirement understanding is validated during development and at the end of the iteration.

2.1.2 Write Automated Test

This is the implementation of the first TDD rule: never write a single line of code unless you have a failing automated test. This means that before any line of the requirement implementation code is written, automated test code is written. The automated test will fail because there is no implementation code yet. Writing an automated test is not a spontaneous process. A test is chosen from a test list that has already been created during brainstorming sessions. This list contains the tests that verify the requirement and describe the completion criteria [2]. There are a few important things to remember [2]:
(1) The first test implemented need not be the first test on the list. The developer is free to write code for any test on the list, in any order;
(2) The test list is not static, and adding or changing tests is welcome;
(3) A test shall be as simple as possible. The developer should remember that a unit test tests a particular class or a particular method inside that class;
(4) A new test shall fail; otherwise there is no reason to develop the test.

Rule number 3 is not always possible to respect, because in situations that involve database operations, Web services, or GUI (Graphical User Interface) tests, a test implementation can be complex.

2.1.3 Execute Automated Test

Because the implementation code is not yet written, and only an empty implementation method exists to avoid compiler errors, the test will always fail. An automated testing tool is a precondition for implementing TDD. It shall be possible to execute all tests, or a particular test, automatically.
This requirement is very important because developing small tests, step by step, can suddenly end with a test suite, or test suites, containing hundreds of tests that shall be executed each time the implementation code is changed or a new test is added. This kind of testing is called Regression Testing, and it is a very important step in the Quality Assurance process. A regression test shall confirm that newly introduced changes or newly added functionality do not cause already existing functionality to fail. Executing each of these hundreds of tests manually, whenever the code is changed, would take too much time and would never be feasible, because the TDD methodology assumes frequent code changes.

Today, the most used code-driven testing frameworks are known as the xUnit framework collection. These are open-source solutions based on Kent Beck's design of SUnit, a Smalltalk test tool. Kent Beck and Erich Gamma ported SUnit to Java and called it JUnit. Versions of this automated test framework are available for different programming languages, for example NUnit for .NET and CppUnit for C++. The most important advantage of these test frameworks is the possibility to execute a whole regression test suite at once by executing a class where all the tests are stored as separate methods. Tests can share common preconditions for successful test execution, which in xUnit jargon is called a Test Fixture, as well as a clean-up procedure that rolls back changes and returns tests to the initial state. Assertions are functions that check the result of each test execution. Different assertion functions are available that can test positive results as well as failures. The input parameters to an assertion function are the expected result, which is usually hard-coded, and the actual result received after test execution. The result of this comparison is true or false, and thus the test result is success or failure.
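As an illustration of the fixture-and-assertion pattern described above, here is a minimal sketch using Python's standard-library unittest, one of the xUnit family of frameworks. (The paper's own examples use C# and NUnit; the Counter class here is a hypothetical stand-in, not from the paper.)

```python
import unittest

class Counter:
    """Hypothetical class under test."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class TestCounter(unittest.TestCase):
    def setUp(self):
        # Test Fixture: shared precondition, executed before every test
        self.counter = Counter()

    def tearDown(self):
        # Clean-up: return to the initial state after every test
        self.counter = None

    def test_starts_at_zero(self):
        # Assertion compares a hard-coded expected result with the actual result
        self.assertEqual(0, self.counter.value)

    def test_increment(self):
        self.counter.increment()
        self.assertEqual(1, self.counter.value)

if __name__ == "__main__":
    # Executes the whole suite at once and reports success or failure per test
    unittest.main()
```

The setUp/tearDown pair plays the role of the Test Fixture and clean-up procedure; in NUnit the corresponding hooks are methods marked with the [SetUp] and [TearDown] attributes.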
The results are presented visually, by a red or green bar that changes color according to the test result. If only one test is executed and it executes successfully, a green bar shows the final result as success; otherwise a red bar shows a failed test. If more than one test is executed in the same test run and only one test fails, the red bar indicates that the whole test run fails, even if all other test results were successful and internally produced green flags. This is important to remember during execution of regression tests, when all tests are executed at once. The red bar shows that the new version introduced changes that break existing functionality. The testing is considered successful when all test executions succeed and the result is presented visually by a green bar. This also means that each test shall be designed to be independent of the execution results of other tests, which is not always an easy job, especially when data stored in external storage, such as databases, is involved. Even though a unit test is defined as a test of the smallest possible testable unit, its primary purpose is testing a method inside a class, and it is recommended to avoid using it to test databases, interfaces, or across component boundaries, in practice unit test frameworks are often used for any kind of testing. Complex tests can lead to a complex and obscure test case design.

2.1.4 Write Implementation Code

If a test fails, then Write Implementation Code should provide implementation code that makes the corresponding test case execute successfully. TDD considers a new test that passes immediately useless and unnecessary: a new test shall always fail and cause changes in the implementation code. These changes can mean writing new implementation code or changing existing code. TDD recommends initially writing very simple implementation code and keeping the code design as simple as possible. This means that the implementation code shall be just complex enough to produce successful execution of the corresponding test case.
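The "just enough to pass" discipline can be sketched as follows, in Python for illustration (the Stack and is_empty names are hypothetical here; the paper's worked example in Section 3 applies the same idea in C#):

```python
# Step 1 (red): the test is written before the implementation exists.
def test_new_stack_is_empty():
    s = Stack()
    assert s.is_empty()

# Step 2 (green): the simplest implementation that makes the test pass,
# even a hard-coded value; later tests will force the real logic.
class Stack:
    def is_empty(self):
        return True

# With the hard-coded implementation in place, the test now passes.
test_new_stack_is_empty()
```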
Newkirk and Vorontsov [2] describe a set of check points that best express the recommendation that implementation code should do no more and no less:
The code is appropriate for the intended audience;
The code passes all the tests;
The code communicates everything it needs to;
The code has the smallest number of classes;
The code has the smallest number of methods.

The highest priority, according to the authors, is the first criterion, that the code is appropriate for its audience; the next priority is that the code passes all the tests; and the third priority is code communication. Recommendations such as "source code should be simple, sufficient and elegant" are always difficult to interpret and very difficult to achieve. The simple reason is that code is still mostly written by human beings, and solutions are designed by human beings. Different people will probably design and solve even the most simple requirement differently. They will use different names for classes, methods, parameters, and variables, as well as different structures and data types. They will probably implement error handling differently, too. The following is my own list of recommendations for writing code:
Document the method purpose and parameters by writing sufficient comments;
Document the purpose of each variable and algorithm by writing sufficient comments;
Write code as simply as possible, and always keep in mind that someone else will probably maintain your code;
Implement error handling and properly report (log) any kind of error;
If a pattern is used, document which pattern you believe the current code belongs to.

Properly documenting code is very important even if it is your own code that is not supposed to be maintained by other people. After some time we forget the code's purpose, and we cannot easily understand why we wrote that piece of code. Simplicity means that the developer shall avoid complex

and non-standard solutions when they are not necessary, and assume that those who will maintain the code probably do not have the same knowledge and experience as the code's initial author. Implementing error handling makes code robust. If the code fails or cannot satisfy the required functionality, a message should report the reason to the caller.

2.1.5 Refactoring

Although refactoring code has been done informally for years, William Opdyke's 1992 Ph.D. dissertation [3] is the first known paper to specifically examine refactoring [4], although all the theory and machinery have long been available as program transformation systems [5]. Removing duplicate code is today better known as code refactoring. Code refactoring has become widely popular in recent years, and Martin Fowler's very popular book Refactoring: Improving the Design of Existing Code [4] can still be used as a reference book containing a long list of refactoring techniques. Different definitions are available, and all agree that refactoring is improving the design of existing code. Refactoring does not add new functionality or change old functionality. Refactoring eliminates duplicates, makes code more readable, and improves the code's internal structure.

The refactoring process means refactoring of the test case code as well as refactoring of the implementation code. Just as implementation code can introduce duplicates, so can test case code. Test case initializations, preconditions, and clean-up procedures can be shared between test cases. This is a good starting point, and moving these code lines into separate, shared methods eliminates code duplication. Test-case-specific requirements can make code complex, and if not handled properly, eliminating duplicate code can introduce too many switch or if statements and code branches. In this case it is important to keep common sense and not make the code more complex than necessary. The refactoring process will probably start when more test cases are available. It depends on the quality of the test cases and the source code, as well as on the developer's skills and experience.

2.2 When to Stop TDD Iterations

If all tests are green (successfully executed), then on the Test Driven Development workflow diagram (Fig. 1) three paths are available:
(1) Refactoring;
(2) Write Automated Test;
(3) Stop Further Development.

The first two are described in the previous sections. The third possibility means that application development is completed. Some authors call the Red, Green, Refactor sequence the TDD pattern. A decision to stop further development can be made when all test cases from the test case list are implemented, all tests pass, and all implementation code is covered by test code. This means that all requirements are implemented and there is no reason to use more time and resources. However, by inspecting the source code it is probably always possible to find a way to improve it: make it more robust, report errors better, or make it more understandable. When to stop depends on the available time and resources. If there is more time, it can be used to develop more test cases or to document the code by inserting comments and editing existing comments. If time is a critical resource, then a green bar after execution of the whole test suite is the best indication to move forward and declare the application ready.

3. TDD Example

This example is inspired by James W. Newkirk and Alexei A. Vorontsov's book, Test-Driven Development in Microsoft .NET [2]. The reason we chose to use and adapt this example is simple: Kent Beck and Martin Fowler, as well as other computer science experts, were part of the team that reviewed this book, wrote the foreword, or were part of the book's technical team. The example in this paper corrects a small error from this

book example and also recommends a different solution. The code, implementation, and refactoring of the test cases differ, too.

This example is an implementation of a stack structure that can store and return objects using the Last In, First Out (LIFO) approach. The very first step after understanding the requirement is to make a list of test cases. This is just a brainstorming process, and test cases are written down without sorting them in any particular order. For example, this list can contain the following test cases:
(1) Push one object on an empty Stack and then pull an object from the Stack. Verify that the pulled object is equal to the pushed object;
(2) Push three objects on an empty Stack and then pull all three objects from the Stack, one by one. Verify that the objects are pulled in the correct (LIFO) order;
(3) Create an empty Stack. Verify that the Stack is empty;
(4) Push an object on a Stack. Verify that the Stack is not empty;
(5) Remove all elements from a Stack and verify that the Stack is empty.
And so on. Refer to Section 2.1.2, Write Automated Test, for the recommended rules for writing an automated test list. It is important to remember that this list is not static: it can be extended with additional tests at any time, and the order of implementation does not depend on the place where a test is written down. The developer decides which test will be implemented next.

I decided to start implementation with test number 3, verification that a Stack is empty. I created a TestStack class that is used for testing the Stack class, where the Stack implementation code will live:

    [TestFixture]
    class TestStack
    {
        [Test]
        public void isEmpty()
        {
            Stack stack = new Stack();
            Console.WriteLine("Test: isEmpty()\n");
            Assert.IsTrue(stack.isEmpty());
        }
    }

Building this code will fail because the Stack class does not exist yet, and the compiler issues an error.
We need to implement the Stack class and, according to the TDD methodology, implement as little as possible:

    /* TDD Example: Stack class, isEmpty method returns false */
    class Stack
    {
        public bool isEmpty()
        {
            return false;
        }
    }

Now we can build and execute the test. The NUnit 2.6 framework is used to execute it. Fig. 2 shows that the test fails, and this is a good sign according to the TDD approach.

Fig. 2 TDD Example: NUnit 2.6 isEmpty() test failed.

Now I am going to make this test execute successfully and implement just enough code to satisfy it:

    /* TDD Example: Stack class, isEmpty method returns true */
    class Stack
    {
        public bool isEmpty()
        {
            return true;
        }
    }

I test again, and now the test executes successfully; the green bar confirms that the test case received the expected result. Fig. 3 shows that the test has been executed successfully.

Fig. 3 TDD Example: NUnit 2.6 isEmpty() test success.

If we follow the TDD Red-Green-Refactor pattern, now is the time for code refactoring. Even though the whole code looks very simple, it is possible to improve the design; for example, remove the hard-coded true and false bool values and replace them with a variable in the Stack class. We can also decide to develop another test, for example test number 4 on the test list, "Push an object on a Stack. Verify that the Stack is not empty". The following is the updated TestStack class code with a new test case, pushOne(), added:

    /* TDD Example: TestStack class with pushOne test case */
    [TestFixture]
    class TestStack
    {
        [Test]
        public void isEmpty()
        {
            Stack stack = new Stack();
            Console.WriteLine("Test: isEmpty()\n");
            Assert.IsTrue(stack.isEmpty());
        }

        [Test]
        public void pushOne()
        {
            // create Stack object
            Stack stack = new Stack();
            // create data on Stack
            stack.push("first element");
            // test expected result
            Assert.IsFalse(stack.isEmpty(), "After push isEmpty shall be false");
        }
    }

The change in the TestStack class requires a minimal code implementation in the Stack class:

    /* TDD Example: Stack class with an empty push method */
    class Stack
    {
        public bool isEmpty()
        {
            return true;
        }

        public void push(object element)
        {
        }
    }

I test again using the NUnit 2.6 tool and get a red bar and an error message:

    After push isEmpty shall be false
    Expected: False
    But was: True

Now it is time to implement a solution that makes the test case execute successfully. I created a stack field as an instance of the ArrayList class and added code to the push method:

    /* TDD Example: Stack class, pushOne requirement implemented */
    class Stack
    {
        ArrayList stack = new ArrayList();

        public bool isEmpty()
        {
            return true;
        }

        public void push(object element)
        {
            stack.Add(element);
        }
    }

Now it is time to execute the test again. Fig. 4 shows that the test case fails again, even though the Stack should not be empty.

Fig. 4 TDD Example: NUnit 2.6 pushOne() test failed again.

In this case it is easy to understand why the test case failed: the isEmpty() method always returns the same hard-coded value. We change the implementation code again:

    /* TDD Example: updated isEmpty() method */
    class Stack
    {
        ArrayList stack = new ArrayList();

        public bool isEmpty()
        {
            if (stack.Count > 0)
                return false;
            else
                return true;
        }

        public void push(object element)
        {
            stack.Add(element);
        }
    }

After executing the tests again, all test cases work properly, and a green bar confirms that the test cases received the expected results.

Now it is time for code refactoring, and we will start with the TestStack class. In this class it is very easy to notice that the stack = new Stack(); code is duplicated, and I will remove these duplicates:

    /* TDD Example: refactored TestStack class */
    [TestFixture]
    class TestStack
    {
        Stack stack;

        [Test]
        public void isEmpty()
        {
            stack = new Stack();
            Assert.IsTrue(stack.isEmpty());
        }

        [Test]
        public void pushOne()
        {
            // create Stack object
            stack = new Stack();
            // create data on Stack
            stack.push("first element");
            bool isEmpty = stack.isEmpty();
            // test expected result
            Assert.IsFalse(isEmpty, "After push isEmpty shall be false");
        }
    }

Why did I only move the Stack stack; declaration out of the test cases, but not create a common Stack object with Stack stack = new Stack();? The answer is that doing so could introduce a potentially error-prone situation where a test case can fail if the order of test case execution changes. If the object were common to both test cases, then if isEmpty() were executed after pushOne(), the isEmpty() test case would fail, because the Stack object would not be empty: there would be at least one object on the Stack.
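The independence concern discussed here is exactly what xUnit fixtures address. As an illustrative sketch (again Python's stdlib unittest rather than the paper's NUnit), creating the object in setUp() gives every test a fresh instance, so the execution order cannot matter:

```python
import unittest

class Stack:
    """Minimal stack matching the state reached in the example."""
    def __init__(self):
        self._items = []
    def push(self, element):
        self._items.append(element)
    def is_empty(self):
        return not self._items

class TestStack(unittest.TestCase):
    def setUp(self):
        # A brand-new Stack before each test: no shared state,
        # so the tests stay independent of execution order.
        self.stack = Stack()

    def test_is_empty(self):
        self.assertTrue(self.stack.is_empty())

    def test_push_one(self):
        self.stack.push("first element")
        self.assertFalse(self.stack.is_empty(),
                         "After push is_empty shall be False")

if __name__ == "__main__":
    unittest.main()
```

In NUnit the equivalent hook is a method marked with the [SetUp] attribute, which the framework runs before each test case.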

That would break another best-practice rule: execution of each test case shall be independent of the execution of any other test case. A weaker version of this rule can also be adopted for tests inside the same test suite.

What about the Stack class? I am going to remove the if statement from the isEmpty() method and also remove the hard-coded true and false values. Here is the result:

    /* TDD Example: refactored Stack class */
    class Stack
    {
        ArrayList stack = new ArrayList();

        public bool isEmpty()
        {
            return !(stack.Count > 0);
        }

        public void push(object element)
        {
            stack.Add(element);
        }
    }

What I like is that the hard-coded values and the if statement have disappeared, and there is just one line of code. What I do not like is the not operator (exclamation mark) used in the return statement. This can be a source of future errors, because the human brain can be confused by logical statements where not is used.

4. Conclusions

This example shows how implementation is driven by the development of test cases. This approach, without any doubt, provides better test coverage of the implementation code. Continuous code refactoring and elimination of duplicate code require continuous source code inspection and changes that should improve the internal code design. The developer learns about the code's internal structure and becomes confident in making changes. If changes are necessary, then the developer's impact analysis and estimates are much more reliable. However, the claim that TDD improves internal software design and makes further changes and maintenance easier cannot be confirmed by research projects. It seems that design primarily depends on the developer's skills and experience, as well as on the implementation of best practices and internal standards.

Short development iterations discover requirement misunderstandings very early and reduce the expense of correcting these misunderstandings. This approach raises a very important question about final software product quality, especially because the TDD approach claims that the total time used for development is shorter.
This means that cutting development expenses and shortening development time would not affect the quality of the final software product. The test-first approach leads to development of small portions of code, and errors are discovered quickly without using a debugger. The implementation code has fewer defects, which means that more time can be used for the development of new features and functionality. The conclusion is that the TDD approach provides better test coverage of the implementation code and introduces fewer software defects, which means that the delivered software has better quality than software delivered by the traditional approach. The claim that the TDD approach uses the same amount of time or less for project development cannot be confirmed; according to research papers, the TDD approach uses approximately 15% more time for development. Test code needs to be maintained, like all other source code, and in case of requirement changes it can be a daunting task to update a huge number of test cases.

References

[1] K. Beck, Test Driven Development: By Example, Addison-Wesley Professional, 2002.
[2] J.W. Newkirk, A.A. Vorontsov, Test-Driven Development in Microsoft .NET, Microsoft Press, 2004.
[3] W.F. Opdyke, Ph.D. dissertation, http://st.cs.uiuc.edu/pub/papers/refactoring/opdyke-thesis.ps.z, 1992.
[4] M. Fowler, Refactoring: Improving the Design of Existing Code, Addison-Wesley, 1999.
[5] Code refactoring, Wikipedia, http://en.wikipedia.org/wiki/code_refactoring, 2011.