KNOWLEDGE SUBSYSTEM'S INTEGRATION INTO MDA-BASED FORWARD AND REVERSE IS ENGINEERING


Audrius Lopata 1,2, Martas Ambraziunas 3
1 Kaunas University of Technology, Department of Information Systems, Studentu st. 50, Kaunas, Lithuania, Audrius.Lopata@ktu.lt
2 Vilnius University, Kaunas Faculty of Humanities, Muitines St. 8, Kaunas, Lithuania, Audrius.Lopata@ktu.lt
3 Valdoware Inc., Martas.Ambraziunas@gmail.com

Abstract. In 2001 the OMG presented the MDA (Model Driven Architecture) approach, which specifies the application of system models in the software development life cycle. Improving MDA with an Enterprise Knowledge subsystem, whose composition is based on the best practices of the enterprise modeling standards, will reduce the risk of project failures caused by inconsistent user requirements and by insufficient verification of problem domain knowledge against the Enterprise Meta-Model's internal structure. Such an improvement of MDA by a Knowledge-Based subsystem is discussed in this article.

Keywords: Enterprise Knowledge-Based Information System Engineering, Model Driven Architecture, Enterprise Model, Enterprise Meta-Model.

1 Introduction

The majority of IT project failures (about 68% [2]) are caused by inconsistent user requirements and insufficient problem domain analysis. Although new methods of information systems engineering (ISE) are being researched and developed, they are empirical in nature: the project models repository of a CASE system is composed on the basis of the enterprise problem domain. The problem domain knowledge acquisition process relies heavily on the analyst and the user; therefore it is not clear whether the acquired knowledge of the problem domain is adequate. The expert plays a pivotal role in the problem domain knowledge acquisition process, and few formalized methods of knowledge acquisition control are taken into consideration.
The knowledge stored in the repository of a CASE tool is not verified against formalized criteria, so it is necessary to use advanced data capture techniques that ensure an iterative knowledge acquisition process, during which missing or incorrect data elements are obtained and fixed according to the Enterprise Meta-Model. Despite existing tools and CASE systems, requirements analysis still largely depends on the expertise of the system analyst and the user. The OMG provides the Model Driven Architecture (MDA) approach to information systems engineering, which focuses on functional requirements and system architecture, not on technical details only [4]. Model Driven Architecture allows long-term flexibility of implementation, integration, maintenance, testing and simulation. However, the enterprise modeling and user requirements engineering stages of the information system engineering life cycle are not yet covered sufficiently. There is a lack of formalized problem domain knowledge management and user requirements acquisition techniques for the composition and verification of the computation independent model (CIM) specified in the MDA approach. The approach can be enhanced with a knowledge subsystem that ensures CIM verification against formal criteria defined by an Enterprise Meta-Model (EMM). Various standards, such as CEN EN 12204 [5], CEN EN (CIMOSA) [6], UEML [7] and PSM, specify requirements for the EMM's internal structure. The EMM provides components for the construction of an Enterprise Model (EM), such as function, activity, process, resource, actor, goal and business rules. Improving MDA with a knowledge subsystem whose composition is based on the best practices of the standards mentioned above will reduce the risk of project failures caused by inconsistent user requirements and by insufficient verification of problem domain knowledge against the EMM's internal structure. Such an improvement, the Knowledge Subsystem's integration into MDA-based forward and reverse IS engineering, is discussed in this article.
2 Knowledge-Based MDA approach

Most MDA-related techniques are based on empirically collected problem domain knowledge, which negatively influences the validation of the user requirements specification against actual customer needs. In some cases user requirements do not correspond to formal business process definition criteria, which has a negative impact on the next stage of the information system engineering process. This problem can be solved by adding a Control theory [8] based Knowledge subsystem (which includes the EMM and the EM) to particular MDA techniques.
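The verification idea behind such a Knowledge subsystem can be illustrated with a minimal sketch. This is not the authors' implementation: the EMM is reduced here to a set of required element types, and the constraint set, function names and element names are all assumptions made for illustration only.

```python
# Illustrative sketch: the EMM reduced to a set of required element types,
# and an EM instance checked against it. All names are assumptions.

EMM_REQUIRED = {"process", "function", "actor", "goal"}  # assumed EMM element types

def verify_em(em):
    """Return a verification report listing EMM element types absent from the EM."""
    return sorted(EMM_REQUIRED - em.keys())

em = {"process": "Order handling", "actor": "Clerk"}
report = verify_em(em)
print(report)  # ['function', 'goal']

# The acquisition process iterates until the report is empty:
for missing in report:
    em[missing] = f"<{missing} supplied by analyst/user>"
assert verify_em(em) == []
```

In the approach described above this check is of course performed against the full EMM internal structure rather than a flat set of element types; the loop stands in for the iterative analyst/user knowledge acquisition process.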

2.1 Knowledge-Based information systems engineering

The Enterprise Knowledge-Based subsystem consists of two parts: the Enterprise Meta-Model (EMM) and the Enterprise Model (EM). The EMM regulates the formation of the EM. The EMM defines the composition of the computerized problem domain knowledge that is necessary for creating project models and generating program code. The problem domain EM is formed by the user and the analyst according to the EMM's constraints. To support this, a particular method [9] has been developed at Kaunas University of Technology, Department of Information Systems. It is based on Control theory and the best practices of the UEML, ENV 12204, ENV 40003 and WFMC TC standards. The conceptual scheme of the Knowledge-Based subsystem's integration into the ISE life cycle is presented in Figure 1.

Figure 1. Role of the Knowledge-Based subsystem in the ISE life cycle

2.2 MDA Approach

In 2001 the OMG presented the MDA (Model Driven Architecture) approach, which specifies the application of system models in the software development life cycle. A model of a system is a description or specification of that system and its environment for some certain purpose, often presented as a combination of drawings and text [4]. The main concept of MDA is to separate the specification of system functionality from the specification of its implementation on a specific technology platform [4] ("what to do" from "how to do"). The conceptual MDA structure is presented in Figure 2.

Figure 2. Conceptual MDA structure

The OMG defines the following key points of MDA:
- Definition of the Computation Independent Model (CIM), which specifies the system requirements of a particular problem domain (it can also be called the Business Model);
- Transformation of the CIM to the Platform Independent Model (PIM). User requirements specifications are converted to system architecture components and functionality during this process;

- Transformation of the PIM to the Platform Specific Model (PSM), where the abstract system model ("what to do") is extended with targeted platform specific information ("how to do"). The PIM provides the system's architecture and functionality without platform specific information and technical details. The PSM is constructed on the basis of the PIM, enhancing it with platform specific details, i.e. implementation and deployment information;
- Transformation of the PSM to a particular platform's programming code (for example Java, C# etc.) as well as to other artifacts, such as executable files, dynamic link libraries, user documentation etc.

Furthermore, the transformations described above can be performed backwards using reverse engineering. Three different techniques can be applied to perform these transformations:
- Manual: the system analyst creates and studies the composition of all types of the defined MDA models and manually performs all the necessary transformations.
- Semi-automatic: the system analyst uses analysis and design tools that allow the model creation and transformation process to be performed more efficiently.
- Automatic: the transformation tool completes the transformation process without the system analyst's interference.

2.3 Knowledge-Based MDA IS Engineering

According to a survey [1], leading MDA-based ISE methodologies need improvement in the following areas: requirements engineering, CIM construction, and the validation and verification of system models against problem domain processes; moreover, most of the methodologies [1] do not provide sufficient information on which MDA tools are most efficient with a particular methodology. These problems can be solved by enhancing the MDA approach with a Knowledge-Based subsystem, which is currently not MDA compatible. This subsystem is able to handle the validation of the EM against the EMM. The EMM ensures the completeness and consistency of the EM, which is created on the basis of the CIM (during forward engineering) or the PIM (during reverse engineering).
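The forward transformation chain described above (CIM to PIM to PSM to code) can be sketched as a pipeline of functions. This is a toy illustration only: the model contents, placeholder names and the naive string-based "generation" are invented, not part of any MDA tool.

```python
# Toy sketch of the MDA forward chain: CIM -> PIM -> PSM -> code.
# Models are placeholder dictionaries; all names are illustrative assumptions.

def cim_to_pim(cim):
    # requirements become platform-independent architecture components
    return {"components": [f"{req}Service" for req in cim["requirements"]]}

def pim_to_psm(pim, platform):
    # the abstract model is enriched with platform-specific detail
    return {"platform": platform,
            "classes": [f"{c} ({platform})" for c in pim["components"]]}

def psm_to_code(psm):
    # final step: emit source artifacts for the target platform
    return [f"// generated for {psm['platform']}: {c}" for c in psm["classes"]]

cim = {"requirements": ["RegisterOrder", "IssueInvoice"]}
code = psm_to_code(pim_to_psm(cim_to_pim(cim), "Java"))
print(code[0])
```

In a real MDA tool each stage would of course operate on XMI-serialized models rather than dictionaries; the sketch only shows how each stage consumes the previous stage's output.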
The conceptual MDA structure enhanced with the Knowledge-Based subsystem is presented in Figure 3 below.

Figure 3. Conceptual MDA structure enhanced with the Knowledge-Based subsystem

The main steps of forward and reverse MDA IS engineering enhanced by the EM and EMM are discussed below.

Principles of Knowledge-Based MDA Forward IS Engineering

EM construction requires a formal CIM structure. Although numerous existing techniques [3] describe CIM construction procedures, most of them are not formalized enough, which has a negative impact on the EM's constructs and composition. The use of modified workflow diagrams [10] can overcome these shortcomings and properly support the suggested method. XMI compatible third party tools are able to use the Knowledge-Based subsystem's data for transformations between particular MDA models, which ensures the availability of a wide range of development alternatives for MDA model transformations. The set of modified workflow models [11] can be used for CIM construction. When this model is constructed, an iterative CIM-based EM verification against the EMM is started and repeated until all incorrect or missing EM knowledge is updated and corresponds to the internal structure of the EMM. The process leads to the creation of a consistent EM, which will be realized as a relational or object oriented database. The next step is the transformation of the EM to the PIM. The result of this transformation conforms to the XMI standard, so that third party tools can use this model for the next stages of the MDA ISE life cycle (PSM and Code
generation). The detailed workflow of forward engineering is presented in Figure 4 as steps 4-10, which are described in Table 1.

Principles of Knowledge-Based MDA Reverse IS Engineering

Reverse engineering starts, as usual, with the transformation of code (working software) to the PSM. This process is performed by a transformation tool. In the next step a particular MDA compatible tool performs the PSM to PIM transformation, removing platform related constructs or transforming them to the higher abstraction (PIM) level. The Knowledge-Based subsystem handles the transformation of the PIM to the EM. The final reverse engineering result is an EM that is consistent with the analyzed IS. At this point the EM can be used for two main purposes: specification and analysis of the information system's architecture from the Control Theory [8] point of view, or improvement of the existing IS by updating problem domain knowledge, which starts the forward engineering process. The detailed workflow of reverse engineering (steps 1-3) together with forward engineering (steps 4-10) is presented in Figure 4, with descriptions in Table 1.

Figure 4. Main steps of the Knowledge-Based MDA approach

The following types of actors are specified in the Knowledge-Based MDA approach: the system analyst, the Knowledge-Based subsystem and the transformation tool. By default, model transformations are performed automatically without the system analyst's interference. A detailed description of the main steps of the Knowledge-Based MDA approach is presented in Table 1.
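The reverse chain just described (code to PSM to PIM to EM) can be sketched in the same placeholder style. Again, the dictionary models and naming conventions are invented for illustration and do not come from any real transformation tool.

```python
# Toy sketch of the reverse chain: code -> PSM -> PIM -> EM.
# Models are placeholder dictionaries; all names are illustrative assumptions.

def code_to_psm(source):
    # a transformation tool recovers a platform-specific model from code
    return {"platform": "Java", "classes": source["classes"]}

def psm_to_pim(psm):
    # platform-specific constructs are stripped on the way up
    return {"classes": psm["classes"]}

def pim_to_em(pim):
    # UML-level classes are mapped onto enterprise-model processes
    return {"processes": [c.replace("Service", "") for c in pim["classes"]]}

em = pim_to_em(psm_to_pim(code_to_psm({"classes": ["OrderService"]})))
print(em)  # {'processes': ['Order']}
```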

Table 1. Detailed description of the main steps of the Knowledge-Based MDA approach

Step 1. Code to PSM transformation (actor: transformation tool). A particular MDA tool transforms programming code (as well as other artifacts such as executable files and dynamic link libraries) to the PSM. Result: PSM.

Step 2. PSM to PIM transformation (actor: transformation tool). A particular MDA tool performs the PSM to PIM transformation, removing platform related constructs or transforming them to the higher abstraction (PIM) level. Result: PIM.

Step 3. PIM to EM transformation (actor: Knowledge-Based subsystem). The Knowledge-Based subsystem transforms the PIM (which basically consists of UML based models) to the EM of the particular problem domain. Result: EM.

Step 4. EM verification against the EMM (actor: Knowledge-Based subsystem). The EM is verified against the Control Theory based internal structure of the EMM. Missing or incorrect EM data elements that do not correspond to the EMM's internal structure are determined during this step. Result: verification report.

Step 5. Analysis of the verification report (actor: system analyst). The system analyst evaluates the verification report and, in case of success, approves the transformation from EM to PIM, or, in case of failure, defines the actions necessary to resolve the EM's inconsistency with the EMM's internal structure. Result: identification of insufficient problem domain knowledge.

Step 6. CIM construction for the particular problem domain (actor: system analyst). Problem domain knowledge acquisition and CIM composition are performed at this step. This step is empirical in nature and thus depends heavily on the system analyst's experience and qualification. Result: CIM.

Step 7. EM construction from the CIM (actor: system analyst). The CIM is transformed to the EM using a semi-automatic technique. Result: EM.

Step 8. Transformation from EM to PIM (actor: Knowledge-Based subsystem). An XMI standard compatible PIM is constructed according to the EM knowledge. This ensures the PIM's conformance to the formal constraints defined by the EMM. Result: PIM.

Step 9. Transformation from PIM to PSM (actor: transformation tool). A particular MDA tool performs the transformation from PIM to PSM, adding platform specific information to the PIM. Result: PSM.

Step 10. Transformation from PSM to code (actor: transformation tool). A particular MDA tool performs the transformation from the PSM to programming code as well as to other artifacts such as executable files, dynamic link libraries, user documentation etc. Result: code.

3 Conclusions

A Knowledge-Based subsystem, which improves the traditional MDA conception with the best practices of problem domain knowledge and user requirements acquisition methods, is presented in this article. It ensures the verification of problem domain knowledge against the EMM's internal structure. The EMM is intended to be a formal structure and set of business rules aimed at integrating domain knowledge for IS engineering needs. The EMM is used as the normalized knowledge architecture to control the process of constructing an EM for a particular business domain. Some work in this area has already been done [12], [13], [14]. The EM is used as the main source of enterprise knowledge for the discussed MDA approach. Improving MDA with the Knowledge-Based subsystem will reduce the risk of project failures caused by inconsistent user requirements and insufficient problem domain knowledge, and also allows the enhancement of an existing system using reverse engineering principles.

References

[1] Asadi, M., Ramsin, R. MDA-based Methodologies: An Analytic Survey. Proceedings of the 4th European Conference on Model Driven Architecture: Foundations and Applications, Berlin (2008).
[2] Ellis, K. The Impact of Business Requirements on the Success of Technology Projects. Benchmark, IAG Consulting (2008).
[3] Ambler, S. W. Agile Modeling.

[4] OMG. MDA Guide Version 1.0.1.
[5] ENV 12204. Advanced Manufacturing Technology Systems Architecture - Constructs for Enterprise Modelling. CEN TC 310/WG1 (1996).
[6] ENV 40003. Computer Integrated Manufacturing Systems Architecture - Framework for Enterprise Modelling. CEN/CENELEC (1990).
[7] Vernadat, F. UEML: Towards a Unified Enterprise Modelling Language. Proceedings of the International Conference on Industrial Systems Design, Analysis and Management (MOSIM 01), Troyes, France.
[8] Gupta, M. M., Sinha, N. K. Intelligent Control Systems: Theory and Applications. The Institute of Electrical and Electronic Engineers Inc., New York.
[9] Gudas, S., Lopata, A., Skersys, T. Approach to Enterprise Modelling for Information Systems Engineering. INFORMATICA, Vol. 16, No. 2, Institute of Mathematics and Informatics, Vilnius (2005).
[10] Lopata, A., Gudas, S. Enterprise Model Based Computerized Specification Method of User Functional Requirements. 20th EURO Mini Conference "Continuous Optimization and Knowledge-Based Technologies" (EurOPT-2008), May 20-23, 2008, Neringa, Lithuania.
[11] Lopata, A., Gudas, S. Workflow-Based Acquisition and Specification of Functional Requirements. Proceedings of the 15th International Conference on Information and Software Technologies, IT 2009, Kaunas, Lithuania (2009).
[12] Kapocius, K., Butleris, R. Repository for Business Rules Based IS Requirements. Informatica, Vol. 17, No. 4.
[13] Silingas, D., Butleris, R. UML-Intensive Framework for Modeling Software Requirements. Proceedings of the 14th International Conference on Information and Software Technologies, IT 2008, Kaunas, Lithuania, April 24-25, 2008, Kaunas University of Technology.
[14] Gudas, S., Pakalnickas, E. Enterprise Management View Based Specification of Business Components. Proceedings of the 15th International Conference on Information and Software Technologies, IT 2009, Kaunas, Technologija (2009).

IMPLEMENTATION OF EXTENSIBLE FLOWCHARTING SOFTWARE USING MICROSOFT DSL TOOLS

Mikas Binkis, Tomas Blazauskas
Kaunas University of Technology, Department of Software Engineering, Studentu str A, Kaunas, Lithuania

Abstract. Currently there are commercial and freeware tools that allow users to specify algorithms using visual flowcharting software. These tools are widely used for many purposes, from visual programming to the specification of common actions, yet they usually cannot be extended with additional elements and have limited execution capabilities. Our goal is to choose a flexible implementation platform and create an extensible executable flowchart system that can be used to teach students programming. In this article we present a flowchart metamodel created with Microsoft DSL Tools, a graphical designer for domain models with a set of code generators.

Keywords: flowcharting, domain specific languages, Microsoft DSL Tools, visual programming, MDA, extensible flowcharts.

1 Introduction

A flowchart is a graphical representation of a process or of the step-by-step solution of a problem, using suitably annotated geometric figures connected by flowlines, for the purpose of designing or documenting a process or program [1]. Because of its rather simple and comprehensible notation, the flowchart is widely adopted as one of the means of teaching algorithms and programming. Although computer science uses a variety of flowcharts, new software dedicated to flowcharting has offered extensibility. Some programs have combined flowcharting capabilities with programming and allow users to transform their diagrams into specific programming code. The Microsoft Visual Studio add-on Microsoft DSL Tools is an interesting case, since it allows the metamodel of a diagram (or, in this case, a flowchart) to be specified graphically and a model editor to be created.
This means that it is possible to modify flowchart models by adding custom elements, which extends the usability of flowcharts to almost every imaginable field. Another advantage of the DSL approach is that the modelling environment can constrain and validate a created model for the domain's semantics, something that is not possible with UML profiles [2]. The model editor created with DSL Tools can be used to transform the graphical notation into any type of programming code, from simple XML notation to a working program, thus creating an executable flowchart solution. This gives an advantage over commonly used flowcharting software, which usually offers only limited code transformation options and does not provide any executable environment other than the default one. In this article we briefly review some of the existing flowcharting software, analyse the main properties of domain specific languages and propose our prototype flowchart engine, based on Microsoft DSL Tools for Visual Studio 2008.

2 Background

2.1 Flowcharts in the learning process

The human ability to grasp graphic representations faster than textual representations led to the idea of using graphical artifacts to describe the behaviour of algorithms to learners, which has been identified as algorithm visualization [3]. Visual programs based on flowcharts allow students to visualize how programs work and to develop algorithms in a more intuitive fashion. The flow model greatly reduces syntactic complexity, allowing students to focus on solving the problem instead of finding missing semicolons [4]. Flowcharts complement other learning methodologies. One example is Algorithm Visualization using Serious Games (AVuSG), an algorithm learning and visualization approach that uses serious computer games to teach algorithms [5]. Open source, simply structured flowcharts, presented in widely accepted standard formats, may also contribute to the expansion of collective creativity.
Collective creativity is an approach to creative activity that emerges from the collaboration and contribution of many individuals, so that new innovative and expressive art forms are produced collectively by individuals connected by the network [6]. In this way flowcharts may be used as a learning medium to transfer and exchange both simple and complex algorithms or scenarios for the various frameworks that utilize flowcharts.

2.2 Existing flowcharting solutions

RAPTOR is an open source iconic programming environment, designed specifically to help students visualize classes and methods and to limit syntactic complexity. RAPTOR programs are created visually using a combination of UML and flowcharts. The resulting programs can be executed visually within the environment and converted to Java [7].

The Iconic Programmer is an interactive tool that allows programs to be developed in the form of flowcharts through a graphical, menu-based interface. When complete (or at any point during development), the flowchart programs can be executed by stepping through the flowchart components one at a time. Each of these components represents a sequence, a branch, or a loop, so their execution is a completely accurate depiction of how a structured program operates. To solidify the concept that flowcharts are real programs, the developed flowcharts can also be converted into Java or Turing (present capability) or any other high-level language (easily extendable) [8]. The Iconic Programmer supports input/output, selection, looping and code generation, but does not support subprograms.

The SFC (Structured Flow Chart) Editor is a graphical algorithm development tool for both beginning and advanced programmers. The SFC Editor differs from other flowchart creation software in that it focuses on the design of flowcharts for structured programs; using a building block approach, the graphical components of a flowchart are automatically connected, and structured pseudo-code is simultaneously generated for each flowchart. While SFC was originally designed as a tool for beginning to intermediate programmers, it has been used by students in upper level classes and by professional system designers [9].

Visual Logic [10] provides a minimal-syntax introduction to essential programming concepts including variables, input, assignment, output, conditions, loops, procedures, arrays and files.
Like other flowchart editors, Visual Logic supports visual execution and stepping through the elements of a diagram. The language contains some built-in functions from Visual Basic, yet it does not support the creation of classes.

2.3 Domain specific languages

A domain specific language (DSL) is a language designed to be useful for a delimited set of tasks, in contrast to general-purpose languages that are supposed to be useful for much more generic tasks crossing multiple application domains [11]. A key benefit of using a DSL is the isolation of the accidental complexities typically required in the implementation phase (i.e., the solution space), so that a programmer can focus on the key abstractions of the problem space [12]. Other benefits of DSLs include [13]:
- DSLs allow solutions to be expressed in the idiom and at the level of abstraction of the problem domain. Consequently, domain experts themselves can understand, validate, modify, and often even develop DSL programs.
- DSL programs are concise, largely self-documenting, and can be reused for different purposes [14].
- DSLs enhance productivity, reliability, maintainability [15, 16] and portability [17].
- DSLs embody domain knowledge, and thus enable the conservation and reuse of this knowledge.
- DSLs allow validation and optimization at the domain level [18, 19, 20].
- DSLs improve testability following approaches such as [21].

A graphical domain-specific language must include the following features [22]:
- Notation: a domain-specific language must have a reasonably small set of elements that can be easily defined and extended to represent domain-specific constructs.
- Domain Model: a domain-specific language must combine the set of elements and the relationships between them into a coherent grammar. It must also define whether combinations of elements and relationships are valid.
- Artifact Generation: one of the main purposes of a domain-specific language is to generate an artifact, for example source code, an XML file, or some other usable data.
- Serialization: a domain-specific language must be persisted in some form that can be edited, saved, closed, and reloaded.

A domain-specific language is defined by its domain model. The domain model includes the domain classes and domain relationships that form the basis of the domain-specific language. The domain model is not the same as a model: the domain model is the design-time representation of the domain-specific language, while the model is its run-time instantiation. Domain classes are used to create the various elements of the domain, and domain relationships are the links between the elements. They are the design-time representation of the elements and links that will be instantiated by the users of the domain-specific language when they create their models [23].

Despite the benefits offered by DSLs, there are several limitations that hamper their widespread adoption. Many DSLs are missing even basic tools such as debuggers, testing engines and profilers. The lack of tool support can lead to leaky abstractions and frustration on the part of the DSL user [24]. We have chosen Domain Specific Language Tools for Microsoft Visual Studio 2008 for our research, since it provides a robust development environment, adequate debugging mechanisms, good extensibility options (it is possible to add custom elements to the flowchart model), .NET framework support and sufficient documentation.

There is also an approach to domain specific language creation that uses UML profiles [25]. Instead of heavyweight metamodeling, the developer can create a full-featured DSL based on a UML profile and its customization. One of the tools implementing this approach is MagicDraw. Although it is possible to validate models and generate code from them, surveys show [25] that the tool lacks customization flexibility (e.g. creation of a new symbol which has nothing in common with existing UML symbols, inability to change default UML metamodel values, etc.).

3 Prototype flowcharting tool

3.1 Proposed solution

The main goal of our work was to create a customizable visual flowcharting tool capable of generating flowchart definition code that could be utilized by other software (Figure 1). We chose a simple flowchart version with the most common flow diagram symbols and created a metamodel (detailed in section 3.2) using Microsoft DSL Tools. The metamodel was compiled into an IDE that can be used to draw flowcharts and convert them into customizable XML code. The XML code is used by an Adobe Flex application, which visually represents the flow algorithm.

Figure 1.
From metamodel to code

It is important to note that Microsoft DSL Tools automatically creates XML code from both metamodels and models (via implemented serialization functions), yet this generation lacks flexibility: usually even small changes in the XML structure require considerable modifications of the generator. That is why we chose a much faster and easier solution and created our own XML generator (detailed in section 3.3).

3.2 The metamodel of the flowchart

Microsoft DSL Tools for the Visual Studio IDE can be used to create virtually any type of diagram metamodel by specifying its object hierarchy, the relationships between classes and the representation of model objects. The main elements of the metamodel are domain classes (with optional domain properties), which can be connected among themselves with three types of relationships [26]:
- Inheritance: a relationship between a base class and a derived class, displayed as a line with a hollow arrow that points from the derived class to the base class.
- Embedding: a containment relationship, displayed as a solid line in the diagram.
- Reference: a relationship between two domain classes that do not have an embedding relationship, displayed as a dashed line in the diagram.

Every domain relationship has roles (source/target) and a multiplicity (which specifies how many elements can have the same role in a domain relationship). As shown in Figure 2, our flowchart model consists of the most common flowchart symbols: Start, End, Decision, Action, Input, Output and Subdiagram. Every symbol has an identification number (provided by the IDE) and a name. Symbols that require user input in a model have the property Value, which, if necessary, can be validated by custom criteria. A connection between diagram elements is called a FlowConnection and has a property Condition, used in conjunction with the Decision symbol.
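A rough analogue of this metamodel can be sketched outside DSL Tools to make the structure concrete. The class names Symbol, Value, FlowConnection and Condition follow the description above; everything else (the Python representation, field layout) is an assumption for illustration, not the authors' C# domain model.

```python
# Rough Python analogue of the flowchart metamodel described above.
# Names follow the paper (Value, FlowConnection, Condition); the rest is assumed.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Symbol:
    name: str
    value: Optional[str] = None       # the Value property of Input/Output/Action/Decision

@dataclass
class FlowConnection:
    source: Symbol
    target: Symbol
    condition: Optional[str] = None   # the Condition property, used with Decision

start = Symbol("Start1")
decision = Symbol("Decision1", value="a<3")
output = Symbol("Output1", value="a")

flow = [FlowConnection(start, decision),
        FlowConnection(decision, output, condition="True")]
print([(c.source.name, c.target.name, c.condition) for c in flow])
```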

Figure 2. Domain specific language definition for the flowchart model

3.3 Code generator

The code generator is based on the text template mechanism provided by the IDE and can therefore transform the model into any type of programming code. We chose XML, as it is the most versatile and best suited to achieving our goals. It is also important to note that the code generator is not limited to any output language, since by using Microsoft Visual Studio text templates it is possible to transform the model into any type of programming code or formal notation, such as GraphML. While the generated IDE with the mentioned code translator might seem like an all-round platform independent solution, the IDE itself is mainly restricted to the Windows family of operating systems. Some sources indicate that this may be only a temporary inconvenience, as Microsoft has recently acquired Teamprise, a division of SourceGear that built tools to give developers access to Visual Studio 2008 Team Foundation Server from systems running Linux, Mac OS X and Unix [27].

To illustrate our solution, we present a simple example of a program (Figure 3) that asks for numerical input, checks whether the given number is less than 3, increases the number if it is not (a simple illustration of a cycle) and outputs the number. Because of the text amount limitations of the article, we present only the generated XML code. Every element is specified by a name (caption) and an identification code. Additional attributes, such as Type and Color, have been added to adapt the code for use by an animated model representation application (detailed in section 3.4). We are planning to dedicate our upcoming articles to more detailed examples covering more elaborate cases.
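The idea of a template-driven generator walking model elements and emitting the XML notation can be sketched as follows. This is not the authors' T4 text template: the generator below is a hand-written stand-in, and the element and attribute names merely mirror the listing that follows.

```python
# Sketch of a model-to-XML generator in the spirit of the text-template
# approach described above (element/attribute names mirror the paper's listing;
# the generator itself is an illustrative stand-in, not the authors' template).

from xml.sax.saxutils import escape

def emit(elements):
    lines = ['<Flowdiagram Caption="Diagram1">']
    for e in elements:
        lines.append(f'  <{e["kind"]} Caption="{escape(e["caption"])}" '
                     f'Type="{e["type"]}" Color="{e["color"]}" />')
    lines.append('</Flowdiagram>')
    return "\n".join(lines)

model = [
    {"kind": "Start", "caption": "Start1", "type": "Circle", "color": "Black"},
    {"kind": "End", "caption": "End1", "type": "Circle", "color": "White"},
]
print(emit(model))
```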

Figure 3. Example of a simple program in our flowcharting environment

<Flowdiagram Caption="Diagram1" Id="4f5af4e7-4fe3-5a7i-018ca-48b3n7c01a32">
  <Start Caption="Start1" Id="a0014f1d-36a4-4dd6-97bf-f35c010a81ca" Type="Circle" Color="Black">
    <Target Caption="Input1" Id="6a63a117-e b-99d2-18a35ae9c01b" />
  </Start>
  <Input Caption="Input1" Id="6a63a117-e b-99d2-18a35ae9c01b" Type="ReverseParallelogram" Color="Green">
    <InputValue>a</InputValue>
    <Source Caption="Start1" Id="a0014f1d-36a4-4dd6-97bf-f35c010a81ca" />
    <Target Caption="Decision1" Id="9893ffde-e0a6-48d c54ff62738d" />
  </Input>
  <Action Caption="Action1" Id="8f4fff65-12cb e1f60cc48" Type="Rectangle" Color="Green">
    <ActionValue>a = a+1</ActionValue>
    <Source Caption="Decision1" Id="9893ffde-e0a6-48d c54ff62738d" />
    <Target Caption="Decision1" Id="9893ffde-e0a6-48d c54ff62738d" />
  </Action>
  <Decision Caption="Decision1" Id="9893ffde-e0a6-48d c54ff62738d" Type="Diamond" Color="Yellow">
    <DecisionValue>a&lt;3</DecisionValue>
    <Source Caption="Input1" Id="6a63a117-e b-99d2-18a35ae9c01b" />
    <Source Caption="Action1" Id="8f4fff65-12cb e1f60cc48" />
    <Target Caption="Action1" Id="8f4fff65-12cb e1f60cc48" Case="False" />
    <Target Caption="Output1" Id="4ff36dba-aa1e-4b39-b656-a0586b47fe38" Case="True" />
  </Decision>
  <Output Caption="Output1" Id="4ff36dba-aa1e-4b39-b656-a0586b47fe38" Type="Parallelogram" Color="Blue">
    <OutputValue>a</OutputValue>
    <Source Caption="Decision1" Id="9893ffde-e0a6-48d c54ff62738d" />
    <Target Caption="End1" Id="a3e22c3f-c8fc-47ea-b8ab-705c b" />
  </Output>
  <End Caption="End1" Id="a3e22c3f-c8fc-47ea-b8ab-705c b" Type="Circle" Color="White">
    <Source Caption="Output1" Id="4ff36dba-aa1e-4b39-b656-a0586b47fe38" />
  </End>
</Flowdiagram>
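A flowchart document like the one above can be executed by stepping through its symbols one at a time. The interpreter below is a minimal hand-rolled sketch (not the authors' Flex application): it walks a graph that mirrors the example program, evaluating the Decision condition and executing the Action expression. Its dictionary representation of symbols is an assumption made for illustration.

```python
# Minimal sketch of executing such a flowchart by walking its symbols
# (illustrative only; not the authors' Flex viewer). The graph mirrors
# the example program above; the input value is supplied up front.

def run(flow, start, env):
    node = flow[start]
    trace = []
    while node is not None:
        kind = node["kind"]
        trace.append(node["name"])
        if kind == "Decision":
            # pick the outgoing connection whose Case matches the condition
            branch = "True" if eval(node["value"], {}, env) else "False"
            node = flow.get(node["next"][branch])
        else:
            if kind == "Action":
                exec(node["value"], {}, env)   # e.g. "a = a + 1"
            nxt = node.get("next")
            node = flow.get(nxt) if nxt else None
    return trace, env

flow = {
    "Start1":    {"kind": "Start", "name": "Start1", "next": "Input1"},
    "Input1":    {"kind": "Input", "name": "Input1", "next": "Decision1"},
    "Decision1": {"kind": "Decision", "name": "Decision1", "value": "a < 3",
                  "next": {"False": "Action1", "True": "Output1"}},
    "Action1":   {"kind": "Action", "name": "Action1", "value": "a = a + 1",
                  "next": "Decision1"},
    "Output1":   {"kind": "Output", "name": "Output1", "next": "End1"},
    "End1":      {"kind": "End", "name": "End1"},
}
trace, env = run(flow, "Start1", {"a": 1})   # input a = 1 entered at Input1
print(trace[-1], env["a"])
```

Stepping through the trace block by block is exactly the visualization and debugging behaviour described for the model representation application in section 3.4.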

3.4 Executable model representation application

Our flowchart visualization software is implemented in Adobe Flex. This development framework was chosen for its focus on GUI creation as well as its multiplatform support: it is accessible from any web browser with the Adobe Flash Player plug-in installed. We also use the Degrafa framework, which makes it easy to create scalable vector graphics elements. The software provides visualization and debugging features, allowing students to watch the flow of an algorithm by executing each diagram block element. It also allows changing the flow of the algorithm as well as the contents of block elements, so students can quickly investigate various "what-if" scenarios without leaving the visualization environment. All flowcharts uploaded by users reside in a database located on the server. Upon request, a server-side PHP script serves XML code, which is converted into an ActionScript object and presented in a browser window. A separate website CMS (Content Management System) component is used to manage submitted flowcharts.

4 Conclusions and future works

Visual programs based on flowcharts are an effective aid in teaching programming, because students focus on solving the problem instead of concentrating on syntax errors. Current flowcharting software provides considerable functionality for algorithm visualization and visual execution, but offers almost no extensibility of the flowchart model and too little model transformation and run-time execution capability. By using Microsoft DSL Tools, we have created an extendable flowchart model and a prototype IDE, which generates customizable XML code. We also applied the generated code in our executable model representation application, implemented in Adobe Flex.
In the near future we are planning to extend our flowchart model so that it can convey more complex algorithms, and to write additional model transformation plug-ins in order to study the adaptability of the generated code for various frameworks. We are also planning to conduct a study using the improved and completed flowcharting environment to teach students the principles of programming, and to assess the results in an upcoming article.

References

[1] Term: flowchart. SEVOCAB: Software and Systems Engineering Vocabulary. Retrieved 18 January 2010.
[2] Dalgarno M., Fowler M. UML vs. Domain-Specific Languages. Methods & Tools, Summer 2008.
[3] Eades P.D., Zhang K. Software Visualization. World Scientific.
[4] Powers K., Gross P., Cooper S., McNally M., Goldman K.J., Proulx V., Carlisle M. Tools for teaching introductory programming: what works? Proceedings of the 37th SIGCSE technical symposium on Computer science education, ACM, 2006.
[5] Shabanah S., Chen J.X. Simplifying algorithm learning using serious games. Proceedings of the 14th Western Canadian Conference on Computing Education, ACM, 2009, Paper session 1B.
[6] Inakage M. Collective creativity: toward a new paradigm for creative culture. Proceedings of the 2nd international conference on Digital interactive media in entertainment and arts, ACM, 2007, p. 8.
[7] Carlisle M.C. Raptor: a visual programming environment for teaching object-oriented programming. Journal of Computing Sciences in Colleges, Consortium for Computing Sciences in Colleges, April 2009, Volume 24, Issue 4.
[8] Chen S., Morris S. Iconic Programming for Flowcharts, Java, Turing, etc. ACM SIGCSE Bulletin, ACM, 2005, Volume 37, Issue 3.
[9] Watts T. The SFC editor: a graphical tool for algorithm development. Journal of Computing Sciences in Colleges, Consortium for Computing Sciences in Colleges, December 2004, Volume 20, Issue 2.
[10] Overview of Visual Logic.
Retrieved 10 January.
[11] Kurtev I., Bézivin J., Jouault F., Valduriez P. Model-based DSL Frameworks. Companion to the 21st ACM SIGPLAN symposium on Object-oriented programming systems, languages, and applications, ACM, 2006.
[12] Wu H. Automated Generation of Testing Tools for Domain-Specific Languages. Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering, ACM, 2005.
[13] Deursen A., Klint P., Visser J. Domain-Specific Languages. ACM SIGPLAN Notices, ACM, June 2000, Volume 35, Issue 6.
[14] Ladd D.A., Ramming J.C. Two application languages in software production. USENIX Very High Level Language Symposium Proceedings, USENIX Association, October 1994.
[15] Deursen A., Klint P. Little languages: Little maintenance? Journal of Software Maintenance, CWI, 1997.

[16] Kieburtz R.B., McKinney L., Bell J.M., Hook J., Kotov A., Lewis J., Oliva D.P., Sheard T., Smith I., Walton L. A software engineering experiment in software component generation. Proceedings of the 18th International Conference on Software Engineering ICSE-18, IEEE Computer Society, 1996.
[17] Herndon R.M., Berzins V.A. The realizable benefits of a language prototyping language. IEEE Transactions on Software Engineering, IEEE Press, June 1988.
[18] A., Hayden M., Morrisett G., Eicken T. A language-based approach to protocol construction. DSL 97: First ACM SIGPLAN Workshop on Domain-Specific Languages, in association with POPL 97, ACM, 1997.
[19] Bruce D. What makes a good domain-specific language? APOSTLE, and its approach to parallel discrete event simulation. Proceedings of the first ACM SIGPLAN Workshop on Domain-Specific Languages, ACM Press, 1997.
[20] Menon V., Pingali K. A case for source-level transformations in MATLAB. Proceedings of the 2nd conference on Domain-Specific Languages, USENIX Association, 1999, Volume 2, p. 5.
[21] Sirer E.G., Bershad B.N. Using production grammars in software testing. Proceedings of the 2nd conference on Domain-Specific Languages, USENIX Association, 1999, Volume 2, p. 1.
[22] Domain-Specific Development. Retrieved 16 January.
[23] Domain Models versus Models. Retrieved 16 January.
[24] Gray J., Fisher K., Consel C., Levendovszky T., Mernik M., Tolvanen J. Panel: DSLs: the good, the bad, and the ugly. Companion to the 23rd ACM SIGPLAN conference on Object-oriented programming systems languages and applications, ACM, 2008, article no. 9.
[25] Silingas D., Vitiunas R., Armonas A., Nemuraite L. Domain-Specific Modeling Environment Based on UML Profiles. Proceedings of Information Technologies.
[26] Domain relationships. Retrieved 16 January.
[27] Microsoft Acquires Teamprise Assets, Provides Cross-Platform Support for Visual Studio. Retrieved 16 January.

MODEL-DRIVEN QUANTITATIVE PERFORMANCE ANALYSIS OF UPDM-BASED ENTERPRISE ARCHITECTURE

Aurelijus Morkevicius 1, Saulius Gudas 1,3, Darius Silingas 2
1 Kaunas University of Technology, Faculty of Informatics, Information Systems Department, Studentu, LT Kaunas, Lithuania, aurelijus.morkevicius@stud.ktu.lt, gudas@soften.ktu.lt
2 Vytautas Magnus University, Faculty of Informatics, Department of Applied Informatics, Vileikos 8-409, LT Kaunas, Lithuania, darius.silingas@gmail.com
3 Vilnius University, Kaunas Faculty of Humanities, Muitines 8, LT Kaunas, Lithuania, gudas@vukhf.lt

Abstract. The idea of enterprise architecture (EA) has been around since the 1980s. However, the analysis of enterprise architecture performance attributes still lacks a clear approach and tools for implementing it in practice. This paper presents an approach for the model-driven performance evaluation of EA models. The suggested approach is based on the Unified Profile for MODAF and DoDAF (UPDM), the Systems Modeling Language (SysML) parametric diagram, and a bottom-up performance evaluation algorithm. Support for this method has been implemented in the MagicDraw modeling product line. A real-world example is presented to validate the suitability of the approach.

Keywords: UPDM, SysML, Enterprise Architecture, Performance Analysis, Model-Driven Architecture.

1 Introduction

Enterprise Architecture (EA) has been a hot topic since the 1980s [22]. However, it was not very widely applied in practice due to the lack of modeling languages and tools suitable for EA [2]. The EA movement was reinforced by the successful adoption of the Unified Modeling Language (UML) [18] and the Model-Driven Architecture (MDA) [15]. There have been multiple attempts to apply UML to Enterprise Architecture modeling [4], but many modelers found it too complicated and unnatural for solving their domain-specific problems [20].
In 2005, the Unified Profile for MODAF and DoDAF (UPDM) initiative was started in the OMG, but the first version of UPDM was released only in 2009, four years later [19]. As soon as UPDM was officially released, the US Department of Defense mandated it as a DoD Information Technology Standards Registry (DISR) standard. As UPDM is a profile of UML, it has been easily adopted by the majority of UML tool vendors. The versatility of UML and its compatibility with its profiles allow integrating UPDM with other Object Management Group (OMG) standards based on UML, such as the Systems Modeling Language (SysML), the Service Oriented Architecture Modeling Language (SoaML), etc. [21]. This enables creating large and versatile EA models, but does not provide a toolkit for analyzing them and making decisions such as choosing between alternative EA solutions or identifying performance problems in an existing EA solution. To achieve these goals, EA modelers need tools for the quantitative performance analysis of EA models from various domains. Since the OMG does not propose any methods or tools in addition to the modeling language, it is necessary to adopt methods existing in industry or to invent new ones suitable for EA models based on UPDM. The goal of this paper is to present an approach that adopts an existing quantitative performance analysis algorithm, using the SysML Parametric Diagram to evaluate performance values for EA models following the UPDM standard, together with its implementation in the MagicDraw modeling product line and its application to an experimental system. The rest of this paper is structured as follows: in section 2, related work is analyzed; in section 3, the proposed approach for the quantitative analysis of UPDM-based EA models is presented; in section 4, an experimental evaluation of the proposed approach on a small real-world EA model is described; in section 5, the achieved results, conclusions, and future work directions are indicated.
2 Quantitative Performance Analysis of Enterprise Architectures

Considering EA, we generally believe that the quality attributes (such as security and integrity) of an enterprise system are primarily achieved through the EA, just as with software architecture [13]. In other words, most of the design decisions within the EA are strongly influenced by the need to achieve quality attributes. In software engineering, the aim of analyzing the architecture is to predict the quality of a system before it has been built, not to establish precise estimates but to understand the principal effects of the architecture [1].

There is a common misconception that quantitative analysis is too detailed to be performed at the architecture level [3]. Performance engineering practitioners argue that, next to functional aspects, the non-functional aspects of systems should also be taken into account at all stages of system design [14]. Quantitative analysis can serve several purposes. In the first place, it is often used for the optimization of, for example, processes or systems. Similarly, it can be used to obtain measures that support impact-of-change analysis. A third application of quantitative analysis is capacity planning, e.g. how many people should fulfill a certain role to finish a process in time [14]. EA models can be quantified in several ways. Measures of interest include: performance measures, e.g. response time, utilization, workload; reliability measures such as availability and dependability; and cost measures. The techniques and example presented in this paper focus on performance measures.

2.1 Related work

Not much can be discussed as related work, because of the novelty of the UPDM standard, whose first version was released only half a year ago; SysML is also a relatively new standard. The approach of using SysML parametric diagrams for quantitative enterprise architecture analysis has not been applied before, and no closely related work has been published. A model-driven approach for the evaluation of EA has been suggested by Pontus Johnson, Robert Lagerstrom, Per Narman and Marten Simonsson [11]. The suggested extended influence diagrams for quantitative EA evaluation use Bayesian networks. The proposed extended influence diagrams differ from conventional ones in their ability to cope with definitional uncertainty, i.e. the uncertainty associated with the use of language, and in their ability to represent multiple levels of abstraction.
Ulrik Franke, Pontus Johnson, Evelina Ericsson, Waldo Rocha Flores, and Kun Zhu propose [6] an improvement and formalization of EA dependency analysis using methods from Fault Tree Analysis (FTA). FTA is a combinatorial model of system dependability, widely used for safety and reliability evaluations [5]. The method translates the failure behavior of a physical system into events connected by arcs. A visual model portrays the relationships in an accessible way, while a corresponding logical model enables quantitative evaluation. However, this approach faces two major gaps: a gap of abstraction and a gap of expressive power [6]. Maria-Eugenia Iacob and Henk Jonkers [10] propose a quantitative analysis approach for ArchiMate-based EA. The approach has become a predefined one for ArchiMate tools, but the evaluation process itself is not, in general, model-driven. In this paper, the approach [10] is adapted to UPDM-based enterprise architecture in a fully model-driven manner using SysML parametric diagrams.

3 Model-driven quantitative performance analysis of UPDM-based Enterprise Architecture

The standards, techniques and formulas for the quantitative analysis of UPDM-based architecture are discussed in this section.

3.1 SysML and UPDM compliance

The Unified Profile for MODAF and DoDAF (UPDM) defines a set of UML stereotypes, optional SysML stereotypes, model elements and associations. The set of stereotypes extending UML, the UML model elements and the set of SoaML stereotypes define the mandatory L0 compliance level of UPDM. The set of optional SysML stereotypes for UPDM is called the L1 compliance level [19].

Figure 1. UPDM compliance map

At the model level, L1 compliance results in a UML element with both UPDM and SysML stereotypes applied. For example, the Operational Node UPDM stereotype applies to a class; the SysML Block stereotype also applies to a class. By applying both stereotypes to a class, we get a class that possesses meta-properties from both applied stereotypes and, in its conceptual meaning, defines both the Operational Node and the Block. For the rest of the mappings between UPDM and SysML stereotypes [19], see Table 1.

Table 1. Mappings between UPDM and SysML stereotypes.

Block: Capability, Resource Artifact, Capability Configuration, Energy, High Level Operational Concept, Node, Operational Node, Software, System.
Value Type: Climate, Entity Item, Environment, Light Condition, Location, Measurement Set.
Item Flow: Commands, Controls, Data Exchange, Energy Exchange, Information Exchange, Materiel Exchange, Organizational Exchange, Resource Interaction.
Requirement: Enterprise Goal.
Flow Port: Node Port, Resource Port.

3.2 Applying the SysML Parametric Diagram to UPDM-based Enterprise Architecture

In SysML, parametric diagrams are used to create systems of equations that can constrain the properties of blocks [17]. Each block may have value properties. A parametric diagram connects these values to the constraint block's parameters using binding connectors [12] (see Figure 2). Incoming values are constrained within the constraint block and the result is provided to one or more outgoing parameters. The result may be used for further calculations within the same context [7]. In order to apply SysML parametric diagrams to UPDM, the UPDM-based enterprise architecture should be compliant with SysML (meaning that a UPDM element may have a SysML stereotype applied; see section 3.1 above) [19]. When this condition is satisfied, block stereotypes can be applied to the majority of UPDM entities, such as nodes and resources. From the SysML point of view, UPDM elements that have block stereotypes applied may have value types assigned. Value types in turn may be bound to constraint blocks using binding connectors.
A binding connector is a connector which specifies that the properties at both ends of the connector have equal values [17].

Figure 2. SysML constraint blocks meta-model

3.3 Bottom-up performance calculation

A quantitative performance analysis technique is used to evaluate the effectiveness of the proposed approach. The technique has been suggested by Maria-Eugenia Iacob and Henk Jonkers [9]. In summary, the proposed [10] analysis approach consists of two phases: a top-down calculation and propagation of the workloads imposed by the top layer, which provides input for a bottom-up calculation and propagation of performance measures; it is the latter that we perform on the UPDM-based architecture. In this paper we focus on the bottom-up propagation of performance measures. The following recursive expressions apply [9]:

1. The utilization of any resource r is given by equation (1), where dr is the number of internal behavior elements ki to which the resource is assigned.

2. The processing time and response time of an internal behavior element a are computed using the recursive formulas (2), where da denotes the in-degree of node a, ki is a parent of a, ra is the resource assigned to a, and F is the response time expressed as a function of the attributes of a and ra.

3. For example, if we assume that the node can be modeled as an M/M/1 queue [8], this function is F = Ta / (1 - Ura) (3), i.e. the processing time of a divided by one minus the utilization of the assigned resource.

From a given set of values and the values calculated using the top-down workload calculation, we calculate the utilization (U) of each resource and the response time (R) and processing time (T) of each service within the EA scenario.

3.4 The process of Enterprise Architecture scenario performance attributes evaluation

As all the techniques required for the quantitative evaluation of EA scenario attributes have been discussed separately, a clear workflow definition is needed for how to combine these techniques in order to achieve the desired results.

Figure 3. The process of Enterprise Architecture scenario performance attributes evaluation

The process of applying the proposed approach consists of seven steps:
1. Model a UPDM-based EA scenario. The selected operational or systems view scenario is modeled using UPDM internal structure diagrams such as OV-2 and SV-1.
2. Model constraint blocks. This step consists of modeling SysML constraint blocks, their parameters and mathematical equations. Constraint blocks are usually modeled using the SysML block definition diagram.
3. Add SysML value properties to UPDM entities. At this step, the UPDM entities to be evaluated are supplemented with value properties.
4. Bind value properties to constraint block parameters. The value properties of the UPDM entities should be bound to the parameters of the constraint blocks using SysML binding connectors.
5. Instantiate UPDM entities. Instance specifications are created for all the UPDM entities, with empty slots for every value property of the UPDM entity.
6. Add given values.
Values for the slots of the instance specifications are assigned.
7. Solve parametrics. A simulation of the SysML parametric diagram is performed to calculate the result values.

Summarizing the process of the proposed approach step by step, a UML activity diagram is provided in Figure 3 that covers the process of applying SysML parametric diagrams for the quantitative analysis of a UPDM-based Enterprise Architecture scenario.

4 Experimental Evaluation

Let us define a simple systems view (SV) fragment of a UPDM-based Enterprise Architecture from [9] in order to demonstrate the proposed model-driven EA analysis approach. The given EA fragment consists of human resources, systems and services divided into three capability configurations called organization, application and infrastructure. These three capability configurations are the internal parts of the workload capability configuration. The organization capability configuration consists of the post role administrator, who initiates the scenario by searching damage reports using the application's search component. From the service-oriented viewpoint, the administrator stands for the service receiver and the search component stands for the service provider [16]. The search component requests the query results from the database system, and the database system requests data access from the database server.
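The bottom-up propagation along the chain administrator -> search component -> database system -> database server can be sketched in Python. This is an illustration, not the authors' MagicDraw implementation: the formula U = L/C is assumed from the value names used in the paper, the relation R = T/(1 - U) is the standard M/M/1 queueing result, the propagation scheme is a simplification, and all numeric values are invented.

```python
# Illustrative bottom-up calculation on the example chain
# administrator -> search component -> database system -> database server.
# Assumed formulas (not verbatim from [9]): utilization U = L / C,
# and M/M/1 response time R = T / (1 - U); all numbers are invented.

def utilization(workload, capacity):
    """U = L / C; a stable queue requires U < 1."""
    return workload / capacity

def response_time(processing_time, u):
    """M/M/1 response time: R = T / (1 - U)."""
    return processing_time / (1.0 - u)

# Resources ordered bottom-up, with invented workload (L),
# capacity (C) and processing time (T) values.
chain = [
    {"name": "database server",  "L": 2.0, "C": 10.0, "T": 0.05},
    {"name": "database system",  "L": 2.0, "C": 8.0,  "T": 0.10},
    {"name": "search component", "L": 1.0, "C": 4.0,  "T": 0.20},
]

# One simple propagation scheme: each element queues for its own
# processing time plus the response time of the element below it.
total = 0.0
for res in chain:
    res["U"] = utilization(res["L"], res["C"])
    res["R"] = response_time(res["T"] + total, res["U"])
    total = res["R"]
```

After the loop, `total` holds the response time perceived at the top of the chain; the parametric-diagram solver performs the same kind of value propagation, only driven by binding connectors instead of hand-written loops.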

A SysML parametric diagram is used in the context of the Workload Capability Configuration to visualize our sample fragment of EA. Usages of constraint blocks for resource utilization, response time and service provision time are added and connected to the resources' value properties (see Figure 4). To perform the bottom-up performance calculation on the given sample, initial values are needed. To calculate the utilization (U) of a resource, workload (L) and capacity (C) values are needed. To calculate the time (T) for service provision, service execution time (S), multiplicity (n) and response time (R) values are needed. The values are taken from the research of Maria-Eugenia Iacob and Henk Jonkers [9].

Figure 4. SysML parametric diagram for quantitative analysis of a UPDM-based EA fragment

In order to assign the initial values and mark the values to be calculated, the provided parametric diagram should be instantiated (see Figure 5); this is the next step before the sample can be solved quantitatively. The given values are marked as given, and those to be calculated are marked as targets.

Figure 5. Performance analysis results

As Figure 5 shows, the calculation has been successful: all target values have been calculated. Unfortunately, the proposed approach allows evaluating only the structural constructs of EA. The current version of SysML does not allow adding value properties to SysML Behaviors [17]. Behavioral diagrams, especially Activity diagrams, are important in UPDM models. SysML parametric models that could use the information captured in behavior diagrams would simplify UPDM-SysML integration [23].

5 Conclusions and Future works

We have presented an approach for quantitative model-driven performance analysis of EA models based on the new UPDM standard. The SysML Parametric Diagram is used to model EA parameters, and

bottom-up calculation algorithm is applied for deriving performance values. The proposed approach has been implemented in MagicDraw and has been evaluated on a small illustrative fragment of a real-world EA model. Based on the experience of implementing and evaluating the proposed approach, we can draw the following conclusions:
- A model-driven approach enables modeling all kinds of calculations.
- Model fragments, such as constraint blocks, can easily be reused in many contexts, which makes the approach very promising for constructing reusable libraries of EA model elements and for evaluating various combinations to find the best-performing solution.
- While the proposed approach looks promising for complex EA, it may be overly complex and too expensive for a small fragment of EA.
- The proposed approach allows evaluating only the structural constructs of EA.
The proposed approach shall serve as a starting point for more detailed future work on the quantitative evaluation of EA models based on UPDM.

6 Acknowledgement

The authors would like to thank No Magic, Inc., especially the MagicDraw product team, for comprehensive support.

References

[1] Bass L., Klein M., Bachmann F. Quality Attribute Design Primitives and the Attribute Driven Design Method. 4th International Workshop on Product Family Engineering.
[2] Bernard S.A. An Introduction to Enterprise Architecture. Bloomington, Indiana, USA: AuthorHouse.
[3] Ceponiene L., Nemuraite L. Design independent modeling of information systems using UML and OCL. 6th International Baltic Conference on Databases and Information Systems, June 6-9, 2004, Riga, Latvia, 2005, Volume 118.
[4] Dalgarno M., Fowler M. UML vs. Domain-Specific Languages. Methods and Tools, 2008, vol. 16(2), 2-8.
[5] Ericson C. Fault tree analysis: a history. 17th International System Safety Conference, 1999, Orlando, FL, USA.
[6] Franke U., Johnson P., Ericsson E., Flores W.R., Zhu K.
Enterprise Architecture analysis using Fault Trees and MODAF. CAiSE 2009 Conference Proceedings, Vol-453, Amsterdam, The Netherlands: CEUR-WS.
[7] Friedenthal S., Moore A., Steiner R. A Practical Guide to SysML. Burlington, MA, USA: Elsevier.
[8] Harrison P.G., Patel N.M. Performance Modelling of Communication Networks and Computer Architectures, 1992, Boston, MA: Addison-Wesley Longman Publishing Co., Inc.
[9] Iacob M.E., Jonkers H. Quantitative Analysis of Enterprise Architectures. Interoperability of Enterprise Software and Applications, 2006, London: Springer London.
[10] Iacob M., Jonkers H. Quantitative analysis of enterprise architecture. Technical Report ArchiMate D3.5, 2004, Enschede, the Netherlands: Telematica Instituut.
[11] Johnson P., Lagerstrom R., Narman P., Simonsson M. Extended Influence Diagrams for Enterprise Architecture Analysis. Information Systems Frontiers, 2007, Volume 9, Numbers 2-3.
[12] Johnson T.A., Paredis C.J., Burkhart R. Integrating Models and Simulations of Continuous Dynamics into SysML. 6th International Modelica Conference, March 3rd-4th, Volume 1, Germany.
[13] Kazman R., Abowd G., Bass L., Webb M. Analyzing the Properties of User Interface Software Architectures. Technical Report CMU-CS, 1993, Carnegie Mellon Univ., School of Computer Science.
[14] Lankhorst M. Enterprise Architecture at Work, 2005, Berlin, Germany: Springer-Verlag.
[15] OMG. MDA Guide (J. Miller, J. Mukerji, Eds.). Object Management Group.
[16] OMG. Service oriented architecture Modeling Language (SoaML): Specification for the UML Profile and Metamodel for Services (UPMS), 2008, Needham, MA, USA: Object Management Group.
[17] OMG. Systems Modeling Language, Version 1.1, 2008, Needham, MA, USA: Object Management Group.
[18] OMG. Unified Modeling Language (OMG UML) Infrastructure, 2007, Needham, MA, USA: OMG.
[19] OMG.
Unified Profile for the Department of Defense Architecture Framework (DoDAF) and the Ministry of Defence Architecture Framework (MODAF), 2009, Object Management Group.
[20] Silingas D., Butleris R. Towards customizing UML tools for enterprise architecture modeling. Information Systems 2009: proceedings of the IADIS international conference, February, Barcelona, Spain.
[21] Silingas D., Butleris R. Towards Implementing a Framework for Modeling Software Requirements in MagicDraw UML. Information Technology and Control, Kaunas: Technologija, 2009, Vol. 38, No. 2.
[22] Zachman J.A. A Framework for Information Systems Architecture. IBM Systems Journal, vol. 26, no. 3.
[23] Zwemer D. Using ParaMagic and SysML in Combination with UPDM. Atlanta: InterCAX LLC.

INTEGRATING GUI PROTOTYPING INTO UML TOOLKIT

Darius Silingas 1,3, Saulius Pavalkis 1,2, Ruslanas Vitiutinas 1,3, Lina Nemuraite 2
1 No Magic Europe, Savanoriu av. 363, LT Kaunas, Lithuania, darius.silingas@nomagic.com, saulius.pavalkis@nomagic.com, ruslanas.vitiutinas@nomagic.com
2 Kaunas University of Technology, Department of Information Systems, Studentu, LT Kaunas, Lithuania, lina.nemuraite@ktu.lt
3 Vytautas Magnus University, Faculty of Informatics, Vileikos 8-409, LT Kaunas, Lithuania

Abstract. This paper introduces an extension of UML for modeling GUI prototypes. It presents the UML profile for GUI modeling, its implementation in MagicDraw, and its application to an experimental system. The profile contains stereotypes for the major GUI components found in classic GUI libraries like Java Swing, together with several helper stereotypes and enumerations. While UML only allows defining an icon for a stereotype, a proper implementation of this profile requires rendering the symbols of the stereotyped elements as GUI components. This functionality was implemented as a plug-in to the MagicDraw tool. The resulting solution enables storyboarding with GUI prototypes and linking their components with other UML model elements such as use cases, data class attributes, and states in GUI navigation state machines. These capabilities are demonstrated with examples from a test assessment system, MagicTest, which is used for an experimental validation of linking the proposed profile with familiar software modeling artifacts.

Keywords: UML profile, GUI prototyping, storyboarding, model-driven development, MagicDraw.

1 Introduction

The Unified Modeling Language (UML) is the de facto standard in software modeling. Its importance has been amplified by the Model Driven Architecture (MDA) approach, which promotes modeling as the main means of development. One of UML's strengths lies in its capability to visualize and relate multiple architectural views of a software system.
However, UML lacks capabilities for modeling the graphical user interface (GUI), which is an important software architecture view: the one most visible to the end user. In addition to its role in design, user interface prototyping is also widely used for gathering user requirements. The popular storyboarding technique focuses on capturing the user's actions in the system through a series of user interface snapshots providing representative examples of user inputs and system outputs. Currently, it is most common to capture such screen snapshots as hand-drawn sketches on whiteboards or as drawings created in general-purpose drawing tools like Microsoft Visio. Such drawings can only be associated with UML model elements by external hyperlinks, as they lack semantic structure and cannot be represented as composite model elements. Storing them separately from the model makes it difficult to connect finer-grained GUI components with model elements and to perform traceability, change analysis, validation and other tasks that are nicely supported in modern modeling tools like MagicDraw. There have been multiple attempts to promote modeling GUI with UML, but this approach has been harshly criticized for being unnatural and overcomplicated. This criticism is valid, as UML 2 defines 248 meta-classes and 13 diagram types; obviously, GUI modelers need just a small subset of that. They also want to use GUI-specific terminology and properties for model elements, and to render the created GUI models in diagrams like real GUI components. The latter requirement is critical for understanding the created GUI models and gathering end-user feedback.
In order to solve these issues, we propose to extend UML with a profile for GUI modeling with special rendering that emulates the GUI look and feel, and to develop a domain-specific modeling environment that makes it easy to model GUI prototypes and hides the complexity of UML from GUI modelers while maintaining the possibility to integrate GUI models with other model artifacts. In the rest of the paper we present the GUI modeling profile, describe its implementation as a MagicDraw plug-in, and provide illustrative examples of applying it to a case study system. The paper is organized as follows: in section 2 the related work is reviewed, in section 3 the proposed GUI prototyping profile is presented, in section 4 an implementation of this profile and the prototyping environment is described, in section 5 a demonstration of applying the proposed GUI prototyping profile to an experimental system, MagicTest, is presented, and section 6 summarizes the results and indicates future work.
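The core mechanism the proposal relies on (stereotypes that add GUI-specific properties to model elements, plus a renderer that draws stereotyped elements as widgets rather than generic UML boxes) can be sketched independently of any UML tool. The stereotype names, properties and text-mock-up rendering below are hypothetical illustrations, not the actual profile presented later in the paper.

```python
from dataclasses import dataclass, field

# Hypothetical mini-"profile": each stereotype names a GUI component kind
# and lists the GUI-specific properties it adds to a plain model element.
PROFILE = {
    "button":    ["text", "enabled"],
    "textField": ["text", "editable"],
    "frame":     ["title"],
}

@dataclass
class StereotypedElement:
    name: str
    stereotype: str
    tags: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def __post_init__(self):
        # Only properties declared by the stereotype are allowed,
        # mirroring how a profile constrains tagged values.
        unknown = set(self.tags) - set(PROFILE[self.stereotype])
        if unknown:
            raise ValueError(f"{self.stereotype} has no properties {unknown}")

def render(el, depth=0):
    """Render the model as a text mock-up instead of generic UML boxes."""
    line = "  " * depth + f"[{el.stereotype}] {el.tags.get('text', el.name)}"
    return "\n".join([line] + [render(c, depth + 1) for c in el.children])

# Usage: a two-widget login dialog as a composite model element.
login = StereotypedElement("LoginDialog", "frame", {"title": "Login"},
        children=[
            StereotypedElement("user", "textField", {"text": ""}),
            StereotypedElement("ok", "button", {"text": "OK"}),
        ])
```

Because the GUI components are composite model elements rather than a flat drawing, each child (the text field, the button) remains individually addressable and could be linked to use cases or data attributes, which is exactly what external Visio sketches cannot offer.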

2 Related Works

Despite the fact that user interface modeling spans almost all phases of the Unified Process, there is no workflow dedicated to it. In practice, GUI design decisions are of a creative nature rather than based on some theory. Furthermore, despite its huge number of modeling concepts, UML is still not comprehensive enough to be applied for modeling the user interface. Therefore, multiple extensions of UML have been proposed for that purpose, e.g. [4], [19]. However, some of these extensions are too abstract for capturing GUI details [4], or too complex to be practical [19] when developing GUI prototypes. Even though it is obvious that GUI prototyping is an important software development activity, there is still no standard profile for GUI development. From the early days of using UML, it was understood that domain modeling and GUI design should occur simultaneously [3], [18], [25], [23], [24], [34]. Use cases are a common means for capturing user requirements, e.g. [8]. Naturally, Martinez et al. [23] provide a methodology for validating requirements as early as possible by analyzing GUI combined with use cases; Hennicker and Koch [24] focus on precise steps for systematic design of the presentation; in [25], GUI components are obtained from use cases and activity diagrams. Recently, several novel model-driven frameworks for GUI development have been proposed. Software development communities have accumulated their experience in GUI design in the form of patterns. The most prominent collection of model-driven GUI design patterns, together with an overall pattern-driven and model-based GUI framework, is presented by Ahmed and Ashraf in [1], where different kinds of patterns are used as modules for establishing task, dialog, presentation and layout models. The authors present an XML/XUL [37] based tool for selecting, adapting and applying patterns to task models for generating a GUI prototype, evaluating it and generating the final GUI for the Windows XP platform.
Inesta et al. [12] propose an adaptation of the User Interface Markup Language (UIML) by defining a data model, a services model, and a navigation model that allows data communication from one GUI to another. The obtained user interfaces, together with Web Services, can represent complete applications instead of just being prototypes. GUI representation and code generation techniques are needed alongside existing model-driven methodologies, e.g. [33]. Kapitsaki et al. [14] propose a presentation profile for modeling the GUI presentation of Web applications, the presentation flow and the application navigation properties. It is similar to the profile used by the AndroMDA code generator; it uses UML State Machines for modeling distinct states of application objects and dependencies between these states, and is thus suitable for service flow modeling. Funk et al. [7] propose to integrate observation functionality into the model-driven GUI development process for collecting user data and testing user interface design. In [27], modeling and animation of the user interface to smart devices is proposed for evaluating GUI usability. As one can conclude from these examples of existing research, the total requirements for modeling GUI include the capabilities of representing, evaluating and even monitoring the features of GUI, integrated with the overall software product and its development lifecycle, while the graphical representation itself remains at a rather abstract level. A generalization and further adaptation of existing approaches is needed [17]. Apparently, no standard way of GUI modeling exists that is suitable for usage in universal CASE tools and at the same time capable of sufficiently representing GUI details. Graphical-view-specific user interface details usually exist in integrated development environments, which lack the aforementioned common capabilities for model-driven GUI design.
Consequently, the purpose of our work is to integrate GUI prototyping functionality into a general-purpose UML CASE toolkit, equipping this functionality with high fidelity, interactivity and exhaustive specification, having in mind the further possibilities of relating a GUI design to the development process and generating its implementation. GUI prototyping can be integrated into a UML toolkit in a few ways: by integrating a 3rd-party tool, by hard-coding shapes, or by creating a Domain Specific Language (DSL) [31] for representing such shapes. Each solution has its benefits and shortcomings, and different works exist in areas related to each of these solutions. The first one is 3rd-party component integration. The comparison of GUI prototyping solutions [13], [9] showed that there are not many stand-alone Java Swing tools that could provide a handy prototyping solution. The two most promising candidates are Ribs [28] and Petra [22]. There are also tools dedicated to Java GUI creation; however, it is too cumbersome to quickly create a prototype with such tools. Ribs is good for prototyping as it provides easy creation of layout; however, some objects such as tables and tree elements are not customizable. Petra offers easy prototyping for Web-oriented design; however, it comes as an Eclipse plug-in. Integrating them into a UML tool would allow having realistic components for high-to-middle fidelity prototyping or low fidelity (wireframe) prototyping according to the prototyping techniques described in [32], [38]. However, the 3rd-party solution would be hardly extendable or adjustable: low compatibility with the toolkit, hard fixing of bugs. The second possibility is based on hard-coding simple shapes for low fidelity prototyping with a DSL consisting of scalable and static icons, text, and layout for them, as in [6]. This solution is implemented using an existing UML tool extension engine. It allows defining images and label positions on a shape, initial shape size, etc. However, due to engine limitations, it has no advanced elements such as trees, and the existing elements lack usability.

In order to have high fidelity of GUI and its flexible, standard integration into a UML toolkit, a DSL with hard-coded, advanced shape support is required. Similar solutions are implemented in [36], [29]. However, solution [36] requires too much detail to be specified, not resulting in better GUI communication. Solution [29] has quite low usability: no editing on screen, no look and feel support, and no good integration with other UML-based domains; a project reload is required for changes to be refreshed across the UML and GUI toolkits. In order to fill in this gap in universal CASE tools, we have created a profile for GUI prototyping that allows designing high fidelity, interactive GUI prototypes with detailed specifications. In contrast to many approaches, the profile provides a platform-independent notation familiar to graphics designers. Our GUI prototyping profile is based on the UML component and component realization metaclasses, which allow reuse of GUI components in multiple screens, whereas most existing solutions are based on class or object [11], [25] or composite structure [23], [15], [38] modeling, so they are limited to a single owner per element. Our solution is agile, method independent, and suitable for the major software development approaches. Possibilities for connecting GUI prototyping with different software modeling stages are presented in illustrative examples in Section 5.

3 Defining UML Profile for GUI Prototyping

Based on the OMG metamodel architecture, a software architect needs to prepare the model of the System Architecture at the M1 meta level, which is constructed using UML as a metamodel (modeling language) at the M2 meta level. A significant part of the System Architecture is the GUI Prototype(s), which cannot be represented nicely with UML itself and thus needs to be based on a GUI Modeling Profile extending UML, see Figure 1. Designing this profile and constructing the modeling environment for using it productively is the problem domain of this paper.
[Figure 1. GUI prototype modeling problem domain: at the M2 (Metamodel) level, the GUI Modeling Profile extends UML; at the M1 (User Model) level, the System Architecture uses UML and contains GUI Prototypes based on the profile]

The GUI Prototyping Profile extends UML 2 in order to support stereotypes for user interface prototype modeling. The profile is customized for usage in the MagicDraw UML tool. This gives users the capability to model GUI prototypes and render them as actual GUI components. It also enables integrating GUI prototypes as a particular view of the architecture model and relating them to the other architecture model elements using standard UML relationships and tool-specific hyperlinks. The profile is oriented toward GUI prototyping of software applications. It supports a minimal set of elements and their properties that cover the GUI prototyping needs of most applications. It is independent of a specific GUI look and feel and does not cover details necessary for actual GUI development and component layout. All user-defined GUI prototype components are represented by underlying UML model elements with stereotypes from the GUI Prototyping profile. GUI element-specific properties are taken from the corresponding stereotype tags. The profile defines stereotypes for simple and more advanced customizable GUI elements, e.g. button, label, tabbed pane, table, tree, etc. All stereotypes defined by the profile are grouped into packages based on GUI element type: containers, buttons, text, and other, see Figure 2. The GUI Prototyping profile is based on UML 2 components, classes, and component realizations [20]. Stereotypes for composite GUI elements, e.g. Frame, Panel, GroupBox, extend the Component metaclass. Stereotypes for atomic GUI components, e.g. TextField, ScrollBar, Node, extend the Class metaclass. A standard UML ComponentRealization relationship is used to represent the realization of a composite GUI element by its component GUI elements. When modeling GUI prototypes, ComponentRealization relationships should be created automatically.
Using component realizations enables reusing the same GUI component in multiple composite GUI components. The stereotype hierarchy in the profile is designed to enable reuse of the tag definitions in multiple GUI element types: general properties reused in multiple GUI components are grouped into abstract stereotypes of the GUI Prototyping profile. Figure 3 presents the TitledComponent hierarchy design as an example.
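The tag-reuse idea behind the stereotype hierarchy can be illustrated with an ordinary class hierarchy. The sketch below is an analogy in plain Java: the names mirror Figure 3, but the classes themselves are illustrative stand-ins, not part of the profile or of MagicDraw.

```java
// Analogy for the profile's abstract-stereotype design: shared tags
// (title, borderType) live in an abstract base; concrete "stereotypes"
// such as GroupBox and Frame inherit them and add their own tags.
enum BorderStyle { SIMPLE_LINE, RAISED_ETCHED, LOWERED_ETCHED, RAISED_BEVEL, LOWERED_BEVEL }

abstract class TitledComponent {
    String title;                                       // tag reused by every titled element
    BorderStyle borderType = BorderStyle.SIMPLE_LINE;   // tag reused by every titled element
    TitledComponent(String title) { this.title = title; }
}

class GroupBox extends TitledComponent {
    boolean titleVisible = true;                        // tag specific to GroupBox
    GroupBox(String title) { super(title); }
}

class Frame extends TitledComponent {
    boolean minimize = true;                            // tags specific to Frame
    boolean maximize = true;
    Frame(String title) { super(title); }
}
```

Just as subclasses inherit fields here, stereotypes specializing the abstract TitledComponent stereotype obtain the title and border tags without redefining them.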

[Figure 2. GUI Prototyping profile structure: the «profile» GUI Prototyping package groups stereotypes into the sub-packages containers (TabbedPane, Panel, GroupBox, ScrollPane, Frame, ToolBar), buttons (Button, RadioButton, CheckBox), text (Label, TextField, PasswordField, Hyperlink, TextArea, List, ComboBox, Spinner), and other (Separator, ScrollBar, Slider, ProgressBar, MenuBar, Table, Tree, Node, Row, Leaf, Column, Cell)]

[Figure 3. GUI stereotype design example (excerpt): an abstract «stereotype» TitledComponent [Element] with tags title : String [1] and borderType : BorderStyle [1], where the «enumeration» BorderStyle defines Simple Line, Raised Etched, Lowered Etched, Raised Bevel, and Lowered Bevel; it is specialized by «stereotype» GroupBox [Component] (titleVisible : Boolean [1] = true), «stereotype» Frame [Component] (minimize : Boolean [1] = true, maximize : Boolean [1] = true), «stereotype» Row [Class], and «stereotype» Column [Class]]

4 Implementing GUI Prototyping Support in MagicDraw UML

Implementation of GUI prototyping using stereotyped standard UML elements in a modeling tool requires overriding the default graphical representation of model elements with a typical GUI component view. It also requires updating the graphical GUI component view when model element or diagram symbol properties change. Additionally, it is recommended to define a custom diagram for GUI prototyping. MagicDraw enables implementing these requirements using the DSL engine (for creating a custom GUI modeling diagram) and the Open API (for developing a plug-in for custom rendering of GUI component symbols). MagicDraw clearly separates model element data specification from the visualization of model elements as symbols in diagrams, according to the Reference Model pattern [10]. MagicDraw model element data properties are encapsulated in classes implementing the OMG UML 2.2 metamodel following the principles of the Java Metadata Interface [5]. The MagicDraw Open API provides the ability to override the representation of standard UML element symbols in diagrams using custom renderers.
A renderer is a common pattern used for separating visual components from their drawing algorithms, allowing dynamic determination of visual appearance [10]. The MagicDraw Open API defines the ShapeRenderer class and the PresentationElementRendererProvider interface as a Renderer pattern for implementing custom symbol rendering. Developers may define custom symbol rendering by overriding the ShapeRenderer operation draw and implementing the PresentationElementRendererProvider interface for drawing a specific symbol. The PresentationElementRendererProvider implementation should be registered in MagicDraw using the PresentationElementRendererManager operation addProvider. A universal custom renderer and renderer provider have been implemented for all GUI Prototyping profile stereotypes, see Figure 4. The Java Swing library was chosen as the provider for GUI component symbol rendering: GUI component symbol drawing is delegated to the paint operation of the corresponding Java Swing component. Determining which component should be drawn is implemented using a modified Visitor pattern [21]. JComponentRenderer uses JComponentHandlerAcceptor for performing the appropriate logic, using JComponentHandler interface realizations for the needs of specific GUI symbols.
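The dispatch-and-delegate idea can be sketched in plain Java Swing. The classes below are simplified, hypothetical stand-ins for the plug-in's handler and renderer (MagicDraw's actual Open API types are not reproduced here); they only show how a stereotype name can be mapped to a Swing component whose paint operation draws the symbol off-screen.

```java
import javax.swing.*;
import java.awt.*;
import java.awt.image.BufferedImage;

// Hypothetical, simplified stand-in for the plug-in's JComponentHandler dispatch.
interface ComponentHandler {
    JComponent handleButton(String label);
    JComponent handleTextField(String text);
    JComponent handleLabel(String text);
}

class SwingComponentFactory implements ComponentHandler {
    public JComponent handleButton(String label) { return new JButton(label); }
    public JComponent handleTextField(String text) { return new JTextField(text, 10); }
    public JComponent handleLabel(String text) { return new JLabel(text); }
}

class SymbolRenderer {
    // Dispatches on the model element's stereotype name, then delegates
    // drawing to the Swing component's paint operation (as the plug-in does).
    static BufferedImage render(String stereotype, String text, int w, int h) {
        ComponentHandler handler = new SwingComponentFactory();
        JComponent c;
        switch (stereotype) {
            case "Button":    c = handler.handleButton(text); break;
            case "TextField": c = handler.handleTextField(text); break;
            default:          c = handler.handleLabel(text); break;
        }
        c.setSize(w, h);
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        c.paint(g);   // the Swing component draws itself into the off-screen image
        g.dispose();
        return img;
    }
}
```

In the real plug-in the resulting drawing goes into the diagram symbol rather than into a BufferedImage, but the delegation to the component's paint operation is the same idea.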

[Figure 4. Major MagicDraw plug-in classes for custom rendering of GUI component symbols: UIModelingRendererProvider implements the PresentationElementRendererProvider interface, and its getRenderer(pElement : PresentationElement) operation returns an instance of JComponentRenderer, a ShapeRenderer whose draw(g : Graphics, presentationElement : PresentationElement) operation renders the symbol; JComponentHandlerAcceptor dispatches a PresentationElement or NamedElement to a JComponentHandler, whose realizations (JComponentFactory, JComponentUpdater, JComponentColorFontHandler, JComponentSizeHandler) create or update the specific JComponent instance for a specific GUI modeling element via operations such as handleFrame, handlePanel, handleScrollBar, handleTextField, handleButton, handleLabel, handleTable, and handleTree]

GUI symbol rendering depends on model element properties, thus it must be updated whenever element properties change. MagicDraw provides a model change notification mechanism using the Listener pattern [16]. Listeners for handling the relevant updates of GUI component model element data have been implemented. In addition to the plug-in for custom rendering, a custom GUI Prototyping diagram was created based on the GUI Prototyping profile, using a typical workflow for creating a domain-specific modeling environment as presented in [31]. A snapshot of this environment is presented in Figure 5.

[Figure 5. MagicDraw environment customized for modeling GUI prototypes]

Due to the UML model based implementation of GUI prototyping, the standard MagicDraw features and UML constraints apply to GUI models, e.g. nesting, editing on the diagram, symbol properties support, model data specification dialogs, validation, report generation, export as image, etc.
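The model-change notification just described follows the standard Listener (Observer) pattern. Below is a minimal sketch in plain Java using the JDK's PropertyChangeSupport as a stand-in for MagicDraw's notification mechanism; the model element class and the property name are hypothetical.

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Hypothetical GUI model element whose property changes must trigger re-rendering.
class GuiModelElement {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String title = "";

    void addListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    void setTitle(String newTitle) {
        String old = this.title;
        this.title = newTitle;
        pcs.firePropertyChange("title", old, newTitle);  // notify registered listeners
    }

    String getTitle() { return title; }
}
```

A renderer would register a listener on such elements and repaint the corresponding symbol whenever a change event arrives.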

5 Applying GUI Prototyping to Experimental System

GUI prototyping can be used in various project development stages. We will discuss the most common uses of GUI prototyping in the context of modeling, and give several examples from the modeling of an experimental system, MagicTest [30], which automates student evaluation by automated test assessments. Starting with requirements analysis, one typically defines use cases for the system under design. The end users will invoke the use cases through the GUI, thus a better understanding of end user requirements can be achieved by building GUI prototypes driven by use cases. For storing this information in the model, we recommend relating one or more GUI prototype model elements to each use case model element using a standard UML Realization relationship, see Figure 6. It is also advised to keep use cases and GUI prototypes in separate packages, as the former are part of an abstract platform-independent model (PIM) and the latter are part of a more concrete platform-specific model (PSM). The transition from an abstract presentation model to a more concrete one by refinement is analyzed in [23]. In MagicDraw, a modeler can assign hyperlinks to GUI prototype buttons or any other model element. This enables navigation through prototypes for simulating an abstract use case realization. The behavior of a complex use case is typically specified using activity models. GUI prototypes can be hyperlinked to activity actions for illustrating GUI invocation in a particular use case scenario. The relation between activities and GUI elements is analyzed in [2], [23], [34]. Theoretically, it is possible to implement a model-to-text transformation that transforms the model into an executable prototype realization by generating UIML [35], XUL [37] or other user interface specification language scripts.
For completeness of use case coverage in GUI prototypes, it is easy to set up an automated relationship matrix, which is a standard feature of modern UML tools like MagicDraw. The analysis of use case and GUI prototype relationships is very common in practice and research [2], [3], [18].

[Figure 6. Fragments of use case model, screen prototypes, and relationship matrix]

One of the first tasks for shifting from requirements analysis to design is identifying the major components in all system layers. Robustness analysis is a common technique for identifying user interface components (called boundaries), services (called controls), and data structures (called entities), which is supported in most UML tools [30]. A modeler can obviously link each boundary element to one or more GUI prototypes. However, this gives just a fragmented view of each GUI element as a separate component. For an overall understanding of user interface navigation it is recommended to define a state machine representing the possible transitions between screens for a particular actor [30]. In such a state machine, each state represents the use of a particular GUI component, which makes it a natural candidate for assigning hyperlinks for navigating to one or more GUI prototypes.
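The completeness check that such a relationship matrix supports can be sketched as a simple set computation: given the Realization links from GUI prototypes to use cases, report the use cases that no prototype realizes. The sketch below is illustrative plain Java with hypothetical element names, not MagicDraw's API.

```java
import java.util.*;

// Hypothetical coverage check: which use cases have no realizing GUI prototype?
class CoverageMatrix {
    // realizations: prototype name -> use cases it realizes (Realization relationships)
    static Set<String> uncovered(Set<String> useCases, Map<String, Set<String>> realizations) {
        Set<String> covered = new HashSet<>();
        for (Set<String> targets : realizations.values()) {
            covered.addAll(targets);
        }
        Set<String> result = new TreeSet<>(useCases);  // sorted for stable reporting
        result.removeAll(covered);
        return result;
    }
}
```

For example, with use cases {Take Test, Review Results, Manage Questions} and prototypes realizing only the first two, the check would flag Manage Questions as lacking a GUI prototype.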

[Figure 7. GUI navigation schema with hyperlinks to multiple GUI prototypes]

It is necessary to make sure that the information that needs to be shown in GUI components is available in the data structure. When a detailed data structure is modeled, it is very useful to relate the fields of a specific GUI prototype with the properties of data classes, see Figure 8. This may help to identify inconsistencies between the user interface and data layers and to change one or the other accordingly. These relationships are later very useful for traceability and change impact analysis purposes.

[Figure 8. Mapping of user interface component fields to data class properties]

Table 1 consolidates the potential uses of GUI prototyping in a typical modeling project following the modeling process presented in [30].

Table 1. Applying GUI prototyping in a modeling project

Project stage | Standard UML model elements | Usage of GUI prototyping | Relationships to standard UML model elements
Requirements | Use cases | GUI prototypes are modeled to better understand the requirements based on use cases | Realization relationship from GUI prototype to use case
Requirements | Use case activities | Linking GUI prototypes to actions enables easy simulation of use case scenarios | Navigable hyperlink from action to GUI prototype
Architectural design | Robustness analysis model | Linking GUI prototypes to boundary classes enables easy visualization of their structure | Navigable hyperlink from boundary class to GUI prototype
Detailed design | State machine for GUI navigation | Linking GUI prototypes to states enables easy simulation of navigation scenarios | Hyperlink represented on diagram
Detailed design | Data classes | Linking GUI prototype fields to data class properties assures consistency between GUI and data layers and enables traceability | Usage relationship from GUI component field to data class property

By connecting a number of GUI prototypes, linking them via hyperlinks on buttons, and transforming the model into a browsable report [31], it is possible to present a story of using the application. Such a GUI prototyping approach is called storyboarding [11], [26], [38] and is quite popular in practice.

6 Summary and Future Work

We have defined a UML profile, which contains stereotypes for modeling the major GUI elements, and implemented a MagicDraw plug-in, which renders the symbols of the stereotyped GUI model elements as Java Swing components. The plug-in also includes domain-specific customizations for the stereotypes and a custom GUI Modeling diagram that make it easier and more productive to apply the profile. We have validated the suitability of this extension by modeling the GUI for a case study system, MagicTest, and connecting it to the other model elements, which integrates the GUI prototypes into the overall system architecture model. The plug-in has been released as a part of the commercial MagicDraw product. It has already been taken into use by many MagicDraw tool users, who appreciated it and provided multiple suggestions for improvements. In the future, we are planning to refactor the profile for better integration with the composite structure concepts, and to enable switching between different look and feel libraries, independent of the MagicDraw look and feel, which would enable the emulation of sketching and webpage design.
We are also considering:
- conversion to a working realization, by generating XUL [37] or another user interface description language and aligning GUI Prototyping profile capabilities with XUL;
- improving the usability of relating various model elements with GUI prototype elements;
- defining validation suites [29] for completeness and style conformance of GUI prototype models;
- supporting internationalization of GUI prototype textual properties;
- providing predefined but extendable libraries of reusable GUI prototype components;
- defining an open API for enabling MagicDraw users to customize and extend GUI prototyping.

The popularity of the GUI prototyping plug-in and the multiple directions for improvements prove the importance of having GUI prototyping as a component of the UML toolkit. We will continue our scientific and practical research and development activities targeted at making it even more useful for modelers.

7 Acknowledgements

We would like to thank Dennis Pingel, who built a prototype of the MagicDraw plug-in for GUI prototyping based on the provided requirements during his internship at No Magic Europe, and Mindaugas Genutis, who supervised Dennis and later evolved the prototype into a high-quality product.

References

[1] Ahmed, S., Ashraf, G. Model-based user interface engineering with design patterns. Journal of Systems and Software, Vol. 80(8), August 2007.
[2] Almendros-Jimenez, J., Iribarne, L. Designing GUI components from UML Use Cases. In Proc. 12th Int. Conf. and Workshop on the Engineering of Computer Based Systems, 2005.
[3] Blankenhorn, K., Jeckle, M. A UML Profile for GUI Layout. In Object-Oriented and Internet-Based Technologies, LNCS, Vol. 3263, 2004.
[4] Conallen, J. Building Web Applications with UML. Pearson Education.
[5] Dirckze, R. Java Metadata Interface (JMI) Specification. Java Community Process, Version 1.0, 2002.
[6] Enterprise Architect: UI Modeling extension.
[7] Funk, M., Hoyer, P., Lin, S.
Model-driven Instrumentation of Graphical User Interfaces. In Second International Conferences on Advances in Computer-Human Interactions, 2009.

[8] Gudas, S., Lopata, A. Meta-model based development of use case model for business function. Information Technology and Control, Vol. 36(3), 2007.
[9] GUI prototyping tools.
[10] Heer, J., Agrawala, M. Software Design Patterns for Information Visualization. IEEE Transactions on Visualization and Computer Graphics, 12(5).
[11] Hennicker, R., Koch, N. Modeling the User Interface of Web Applications with UML. Workshop of the pUML Group held together with the «UML» 2001 on Practical UML-Based Rigorous Development Methods - Countering or Integrating the Extremists, October 01, 2001.
[12] Inesta, L., Aquino, N., Sánchez, J. Framework and authoring tool for an extension of the UIML language. In Advances in Engineering Software: Designing, modelling and implementing interactive systems, Vol. 40(12), December 2009.
[13] Java GUI Builders: Java Swing GUI builders review, html.
[14] Kapitsaki, G. M., Kateros, D. A., Prezerakos, G. N., Venieris, I. S. Model-driven development of composite context-aware web applications. Information and Software Technology, Vol. 51, 2009.
[15] Koch, N., Baumeister, H., Mandel, L. Extending UML to Model Navigation and Presentation in Web Applications. In Modeling Web Applications, Workshop of the UML, Ed. Geri Winters and Jason Winters, York, England, October.
[16] Landay, J. A., Borriello, G. Design Patterns for Ubiquitous Computing. Computer, Vol. 36(8), 2003.
[17] Link, S., Schuster, T., Hoyer, P., Abeck, S. Focusing Graphical User Interfaces in Model-Driven Software Development. In Proceedings of First International Conference on Advances in Computer-Human Interaction, 2008, 3-8.
[18] Martinez, A., Estrada, H., Sánchez, J. From Early Requirements to User Interface Prototyping: A methodological approach. 17th IEEE International Conference on Automated Software Engineering, Edinburgh, UK, September 23-27, 2002.
[19] Moreno, N., Fraternali, P., Vallecillo, A. WebML modelling in UML. Software, IET, Vol.
1(3), June 2007.
[20] OMG Unified Modeling Language (OMG UML), V2.2, OMG.
[21] Palsberg, J., Jay, C. B. The Essence of the Visitor Pattern. In Proc. COMPSAC, 22nd International Computer Software and Applications Conference, 1998.
[22] Petra.
[23] Pinheiro da Silva, P., Paton, N. W. UMLi: the Unified Modeling Language for Interactive Applications. In UML - The Unified Modeling Language: Advancing the Standard. Third International Conference, Springer, 2000.
[24] Pinheiro da Silva, P., Paton, N. W. User Interface Modeling with UML. In Proceedings of the 10th European-Japanese Conference on Information Modelling and Knowledge Representation, Saariselkä, Finland, IOS Press, May.
[25] Pinheiro da Silva, P., Paton, N. W. A UML-Based Design Environment for Interactive Applications. In Proceedings of UIDIS'01, Zurich, Switzerland, IEEE Computer Society, May 2001.
[26] Preece, J., Rogers, H., Benyon, D., Holland, S., Carey, T. Human-Computer Interaction. Addison Wesley.
[27] Propp, S., Buchholz, G., Forbrig, P. Task Model-Based Usability Evaluation for Smart Environments. Engineering Interactive Systems, LNCS, Vol. 5247, 2008.
[28] Ribs: ReportMill's Interface Builder for Swing.
[29] Screen Architect.
[30] Silingas, D., Butleris, R. Towards Implementing a Framework for Modeling Software Requirements in MagicDraw UML. Information Technology and Control, Vol. 38(2), 2009.
[31] Silingas, D., Vitiutinas, R., Armonas, A., Nemuraite, L. Domain-Specific Modeling Environment Based on UML Profiles. In Information Technologies' 2009: Proceedings of the 15th International Conference on Information and Software Technologies, IT 2009, Kaunas, Lithuania, April 23-24, 2009, Kaunas University of Technology, Technologija, 2009.
[32] Simon, J. Interaction Happens: Prototyping Techniques. prototyping.html.
[33] Skersys, T. Business knowledge-based generation of the system class model. Information Technology and Control, Vol. 37(2), 2008.
[34] Tick, J.
Software User Interface Modeling with UML Support. Institute of Software Engineering, IEEE, 2005.
[35] User Interface Markup Language (UIML) Version 4.0. Committee Draft. OASIS, 23 January.
[36] Visual Paradigm: User Interface Designer.
[37] XUL: The XML User Interface Language.
[38] Zhou, J., Stålhane, T. A Framework for Early Robustness Assessment. In Software Engineering and Applications (SEA'04), MIT, Cambridge, MA, USA, 2004.

THE GUI TESTING METHOD BASED ON TESTING META-MODEL

Andrej Usaniov, Sarunas Packevicius, Kestutis Motiejunas

Kaunas University of Technology, Department of Software Engineering, Studentu 50, Kaunas, Lithuania

Abstract. Software testing consumes more than 50% of all software development resources. Today the majority of software has a graphical user interface, and the most popular way to test software functions is to test them through the user interface. Testing can easily be dropped, increasing the likelihood of producing faulty software many times over [1]. To reduce testing costs, test automation is employed. The GUI testing method based on a testing meta-model, the execution of experiments and the evaluation of results are presented in this article.

Keywords: GUI testing method, tests meta-model.

1 Introduction

Today the majority of software has a graphical interface, and users access software functions through it. Naturally, the most popular way to test such software is to test it through its user interface [2]. This is usually referred to as GUI (Graphical User Interface) testing. During GUI testing, software is verified through its interface: a tester enters some input data into the software windows and checks whether the produced result is correct. This process is manual and very labour intensive. In practice, the testing process is often associated with hard time and budget constraints, lightly documented requirements, misunderstandings of testing objectives, and inaccurate evaluation of testing scope. The user interface tests are usually documented in some form, as steps in semi-formal text documents. Later, these text documents are read by testers and the described test cases are executed manually. The test outcome is evaluated manually by the tester as well. In order to reduce testing costs, test automation is employed [3-7]. The way to reduce testing costs and to allow performing more extensive software testing is to automate the software testing process [8].
The automation includes automatic test case preparation, test execution and result verification. This allows testing software more extensively, thus finding more bugs and increasing its quality while reducing testing costs. Ryser and Glinz have proposed a GUI testing method based on formally describing software usage scenarios and providing instructions on how to manually perform testing using the defined scenarios [9]. The drawback of this approach is that all testing has to be performed manually; the method only provides best practices for developing testing scenarios and performing test execution. Jesus and Luis presented a method of developing graphical user interfaces from two UML models: use case and activity diagrams. They defined some rules for the transformation of specifications into the user interface. First, the user interface components are represented in a UML class diagram; then the class diagram is used for generating code fragments which can be considered GUI prototypes [10]. The GUI testing method proposed by Memon is based on a graph representation of the system's GUI behaviour [11], where nodes are the states of the GUI and transitions are events. The basic steps of the method, which specifies the graph during manual GUI examination, are: identifying components and events, creating event-flow graphs, and computing event sequences. This approach is thus prone to graph incompleteness, and its drawback is that it is labour-intensive. Currently one of the most popular GUI testing techniques is the record-playback automation method [9]: during the first run, testing is executed manually. A drawback of record-playback tools is their high dependency on a platform. For instance, tools often support multiple platforms, but their scripts recorded on one platform are invalid for execution on another one. Also, a script recorded with one tool is incompatible with another tool [2].
Some authors propose easier ways of creating automated user interface tests than record-playback tools [2, 12], or generate user interface tests automatically [13, 14]. The majority of existing GUI testing methods concentrate on preparing a graphical representation of the GUI: they often only give instructions on how to prepare tests manually, without providing means to obtain test cases, and the test cases still have to be executed and their results evaluated manually. Some methods define how to automate only a part of the testing process; others describe how to execute manually created tests automatically. To overcome the manual test creation problem, a test generation approach based on the creation of a tests meta-model is proposed in this paper.

2 The GUI testing method

The GUI testing method is based on a tests meta-model. The proposed method consists of the following parts:
1. Tests meta-model creation;
2. Generation/selection of testing scenario sets using a graph traversal algorithm chosen by the tester: all paths, main paths, or all nodes;
3. Generation of executable testing scripts from the testing scenarios;
4. Execution of the testing scripts on the system under test (SUT) and code coverage measurement;
5. Selection of testing scripts from the generated script sets based on the code coverage measured during execution.

The structure of the GUI testing method is presented below (Figure 1).

Figure 1. The structure of the GUI testing method

The tests meta-model is a graphical representation of the testing process from the tester's perspective, expressed using UML 2.0 activity diagrams. Each activity diagram is the meta-model of some SUT function the tester intends to test, and the way the tester's actions are depicted corresponds to the tester's understanding of the system's behaviour. The activities in a diagram correspond to the steps of the test; activities are also mapped to specific graphical components of the SUT and describe events on them. Data pins are used for passing input data and receiving output data. Finally, all activity diagrams are combined, via calls to the function diagrams, into a master diagram: the tests meta-model of the SUT. Tests meta-model creation is the only manually performed step of the proposed GUI testing method.

Depending on the testing goals and project constraints, different graph traversal algorithms may be used to generate the initial testing scenario sets. The testing scenario set generated using the all paths traversal algorithm, hereafter the all paths set, contains all independent paths within the system under test (SUT); executing all of its generated scenarios therefore requires the largest amount of time. 
The testing scenario set generated using the all nodes traversal algorithm, hereafter the all nodes set, ensures that every tester action defined in the testing meta-model will be reached at least once. It is a subset of the all paths set with a smaller number of testing scenarios and requires less execution time. The testing scenario set generated using the main path traversal algorithm, hereafter the main paths set, consists of the minimal number of testing scenarios which cover only the main aspects of the software's functionality. The generated testing scenario sets are converted into executable testing scripts, which may be expressed in different programming languages. The prepared testing scripts are executed automatically on the SUT, providing code coverage feedback for the subsequent script selection step.

3 Example

To demonstrate the GUI testing method, an automated teller machine (ATM) application is used. An example of the tests meta-model for the ATM PIN code entering functionality is given below (Figure 2). 
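As an illustration of the scenario-generation step, the traversal of a tests meta-model graph can be sketched as follows. This is a hypothetical Python sketch, not the authors' implementation: the adjacency-list graph, the Cancel node, and the greedy node-covering heuristic for the all nodes set are invented for illustration (the other node names follow Listing 1).

```python
# Hypothetical sketch: deriving testing-scenario sets from a tests meta-model
# represented as a directed graph (adjacency lists).

def all_paths(graph, start, end, path=None):
    """Enumerate every simple path from start to end (the 'all paths' set)."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:            # simple paths only: do not revisit a node
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

def all_nodes_subset(paths, graph):
    """Greedily pick a subset of paths covering every node (an 'all nodes' set)."""
    uncovered = set(graph) | {n for vs in graph.values() for n in vs}
    chosen = []
    for p in sorted(paths, key=len, reverse=True):
        if uncovered & set(p):         # path still adds an unvisited node
            chosen.append(p)
            uncovered -= set(p)
    return chosen

graph = {
    "InitialNode": ["DecisionNode"],
    "DecisionNode": ["EnterPinCode", "Cancel"],   # "Cancel" branch is invented
    "EnterPinCode": ["Ok"],
    "Ok": ["ActivityFinalNode"],
    "Cancel": ["ActivityFinalNode"],
}
paths = all_paths(graph, "InitialNode", "ActivityFinalNode")
print(len(paths), len(all_nodes_subset(paths, graph)))
```

On this toy graph both sets coincide; on the ATM meta-model of Figure 2, with its cycles, the all nodes subset is strictly smaller than the all paths set.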

Figure 2. ATM testing meta-model part for the Enter Pin Code functionality

The diagram consists of eight (8) nodes and ten (10) edges with five (5) paths. The numbers of paths, i.e. testing scenarios, generated for the PIN code entering functionality from the testing meta-model using the different traversal algorithms are presented in the table below, with cycles within the diagram limited to one (Table 1).

Table 1. The numbers of testing scenarios for PIN code entering functionality

                              All paths   All nodes   Main paths
Number of testing scenarios       5

The tests meta-model enables generation of testing scenario sets: it is a directed graph which is parsed using the different traversal algorithms (all nodes, main paths, all paths). An example of a generated testing scenario is given below (Listing 1).

Listing 1. Testing scenario for PIN code entering functionality

EnterPinCodeActivityTest1(
  InitialNode  DecisionNode  EnterPinCode  DecisionNode  Ok  ActivityFinalNode
)

By supplementing the testing scenarios with testing data, executable testing scripts are generated. An example of a generated executable testing script is given below (Listing 2).

Listing 2. Executable testing script for PIN code entering functionality

function EnterPinCodeActivityTest1(pinCode, result) {
  PinCode = pinCode;
  Sys.Process("ATM").ATM.EnterPin.enterPinTextBox.Text = PinCode;
  Sys.Process("ATM").ATM.EnterPin.enterPinOkButton.ClickButton();
  result[0] = "Ok";
}

The number of testing scripts depends on the amount of testing data applied to the selected testing scenarios. The numbers of generated testing scenarios and testing scripts for the ATM tests meta-model are given below (Table 2).

Table 2. The numbers of testing scenarios for ATM

                                     All paths   All nodes   Main paths
Testing scenarios set size
Testing scripts set size
Testing scripts execution duration    ~40 min      ~8 min      ~1 min

During the execution of the testing scripts the code coverage of the ATM application is measured, which allows evaluating the effectiveness of the created tests meta-model. The code coverage values (symbol, branch, and method coverage) reached by the testing scenario sets generated using the different traversal algorithms are presented below (Figure 3).

Figure 3. The code coverage using different traversal algorithms

None of the generated script sets was able to reach 100% code coverage; the reason is the inability to measure code coverage within class destructors on program exit. The main paths set requires about one (1) minute of execution and ensures symbol code coverage at the 91.49% level. Execution time increases almost forty (40) times when the all paths set is used, which raises symbol code coverage to 97.44%. Using the code coverage measurements, the generated testing script sets can be reduced by keeping only those scripts which contribute to a code coverage increase. The resulting subset, hereafter the selected set, has a significantly reduced number of scripts yet provides the same code coverage as the generated set (Table 3).

Table 3. Reduction of testing scripts set

             Testing scripts set size    Times       Selected
             generated     selected     decreased    set size, %
Main path                     30          2.03         49.18
All nodes                                 5.41         18.5
All paths                    329          7.22         13.85

The main path selected set, having 30 scripts, provides symbol code coverage of 91.49%. The all paths selected set, having 329 scripts, is eleven (11) times bigger and gives only a 5.95% coverage increase. The main paths set is best suited for projects with tight time constraints: with a relatively low execution duration of about 1 minute it provides acceptable code coverage, higher than 90%. The all paths set gives only a slight code coverage increase, up to 6%. The all nodes set reaches a code coverage level close to the all paths set's results, but its execution duration is fivefold (5) lower. 
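The selection step described above can be sketched as a greedy filter over the generated scripts. This is a hypothetical Python illustration: the script names and covered-unit sets are invented, and in the real method the coverage data would come from executing the scripts on the instrumented SUT.

```python
# Hypothetical sketch of the selection step: keep a generated script only if it
# raises cumulative code coverage. Each script is modelled by the set of code
# units (e.g. symbols) it executes; the data below is invented for illustration.

def select_scripts(scripts):
    """scripts: dict name -> set of covered units. Returns (selected, covered)."""
    covered, selected = set(), []
    # Try larger scripts first so fewer scripts are needed overall.
    for name, units in sorted(scripts.items(), key=lambda kv: -len(kv[1])):
        if units - covered:            # script adds new coverage -> keep it
            selected.append(name)
            covered |= units
    return selected, covered

scripts = {
    "test1": {"enterPin", "validate", "ok"},
    "test2": {"enterPin", "validate"},      # adds nothing beyond test1
    "test3": {"enterPin", "cancel"},
}
selected, covered = select_scripts(scripts)
print(selected)
```

Here test2 is dropped because it adds no coverage beyond test1, mirroring how the selected sets in Table 3 shrink to a fraction of the generated sets while keeping the same coverage.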
The generated and selected testing scenario sets were also evaluated using mutation testing. The ATM application was modified to create mutated versions, so-called mutants; the modifications are called mutation operators, or mutations. Typical mutation operators mimic programming errors such as using a wrong operator or variable name: they replace each operand by every other syntactically legal operand, modify expressions by replacing or inserting operators, or delete entire statements. The size of the mutation operator set is determined by the language of the program being tested and the mutation system used [15]. The mutation testing results are given below (Table 4).

Table 4. Mutation testing results

                    All paths   All nodes   Main paths
Mutants killed, %      100        92.59       82.75
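The mutation-testing idea behind Table 4 can be illustrated with a minimal, invented sketch. This is not the paper's tooling: the withdraw function, the operator mutation, and the test are all hypothetical; real mutation systems generate mutants systematically from the source code.

```python
# Minimal invented sketch of mutation testing: apply a mutation operator
# (here: replace '-' with '+') and check whether the test suite kills the mutant.

def withdraw(balance, amount):
    return balance - amount            # original program under test

def withdraw_mutant(balance, amount):
    return balance + amount            # mutant: '-' replaced by '+'

def suite_passes(fn):
    """The test suite: a mutant that makes it fail is 'killed'."""
    return fn(100, 30) == 70

assert suite_passes(withdraw)          # original passes the suite
assert not suite_passes(withdraw_mutant)  # mutant is killed
print("mutant killed")
```

The "mutants killed" percentage in Table 4 is simply the fraction of generated mutants for which the executed scenario set fails in this way.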

Depending on the testing goals the tester can choose the main paths, all nodes, or all paths approach:
- The main paths set kills 82.75% of mutants with a relatively low execution duration of about 1 minute;
- The all nodes set gives a 9.84% increase in the killed mutant count, with an execution duration of almost 8 minutes;
- The all paths set kills all mutants but requires a relatively high execution duration of up to 40 minutes.

The example shows that the proposed method allows choosing a suitable testing strategy as a trade-off between testing duration and test quality.

4 Conclusions

The GUI testing method based on a tests meta-model was presented in this article. The proposed method decreases the manual effort required for test creation, simplifies test maintenance when changes take place, helps to clarify the testing scope and objectives, allows tests to be generated automatically, and enables regression testing. The proposed GUI testing method allows making a trade-off between testing duration and test quality: 100% of mutants were killed using the all paths set, which is 20% greater than the main paths set's result, but the execution duration was 40 times longer; meanwhile the code coverage of the main paths set is 91.49%, which is 5.95% lower than that of the all paths set.

References

[1] Leon, O., Strategic directions in software quality. ACM Comput. Surv., (4).
[2] Kanglin Li, M.W., Effective GUI Test Automation: Developing an Automated GUI Testing Tool.
[3] Corno, F., et al., Automatic test program generation: a case study. IEEE Design & Test of Computers, (2).
[4] Knowles, R., Automatic testing: systems and applications. 1976, London; New York: McGraw-Hill.
[5] Tracey, N., et al., Automated test-data generation for exception conditions. Software: Practice and Experience, (1).
[6] Wee Kheng, L., K. Siau Cheng, and S. Yi. Automated generation of test programs from closed specifications of classes and test cases.
[7] Xin, W., C. Zhi, and L. Qi Shuhao. 
An optimized method for automatic test oracle generation from real-time specification. In 10th IEEE International Conference on Engineering of Complex Computer Systems (ICECCS'05).
[8] Elfriede, D., Effective Software Testing: 50 Ways to Improve Your Software Testing. 2002: Addison-Wesley Longman Publishing Co., Inc.
[9] Ryser, J. and M. Glinz. A Scenario-Based Approach to Validating and Testing Software Systems Using Statecharts. In 12th International Conference on Software and Systems Engineering and their Applications (ICSSEA). Paris, France.
[10] Jesus, M.A.-J. and I. Luis, Designing GUI Components for UML Use Cases. In Proceedings of the 12th IEEE International Conference and Workshops on Engineering of Computer-Based Systems. 2005, IEEE Computer Society.
[11] Memon, A.M., A Comprehensive Framework For Testing Graphical User Interfaces. Faculty of Arts and Sciences. 2001, University of Pittsburgh: Pittsburgh.
[12] Meszaros, G., Agile regression testing using record & playback. In Companion of the 18th annual ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications. 2003, ACM: Anaheim, CA, USA.
[13] Atif Memon, A.N.Q.X., Automating regression testing for evolving GUI software. Journal of Software Maintenance and Evolution: Research and Practice, (1).
[14] Memon, A., et al. DART: a framework for regression testing "nightly/daily builds" of GUI applications. In Proceedings of the International Conference on Software Maintenance (ICSM).
[15] Roger, T.A., et al., Mutation of Java Objects. In Proceedings of the 13th International Symposium on Software Reliability Engineering. 2002, IEEE Computer Society.

AUTOMATIC DETECTION OF POSSIBLE REFACTORINGS

Stasys Peldzius
Vilnius University, Faculty of Mathematics and Informatics, Naugarduko str. 24, LT Vilnius, Lithuania

Abstract. During the continual evolution of software applications it should be ensured that they remain well designed and programmed. Inevitably, however, defective code appears, called a code smell. It is therefore important to be able to find such problems and to correct them. Code refactoring is a process which removes code smells and improves maintainability, performance, or extensibility. The aim of this paper is to investigate and propose an automatic, universal refactoring tool which detects code smells itself and is independent of the programming language of the software system. The tool uses logic programming: facts describe the converted program, and rules describe the refactorings. The paper also aims at practical benefit, showing how the proposed tool could be realized and demonstrating a refactoring operation.

Keywords: refactoring, logic programming, code smell, software evolution.

1 Introduction

The development of software applications, or their subsequent improvement, very often results in reduced quality: over-large classes and too-long methods are created [6], and the class hierarchy fails to meet the requirements, so programmers have to solve these problems by resorting to the refactoring process. Refactoring is the process of changing an application's internal structure without modifying its existing functionality [5]. Refactorings improve the readability of programs and enable their easier extensibility. Refactoring is used by all programmers, but it is usually performed manually: the programmers themselves have to detect the locations of the bad smells (any symptom in the source code of a program that possibly indicates a deeper problem) and to perform an appropriate refactoring. 
Usually this is a very tedious task that could become much easier with a tool helping to identify the places that need to be refactored. Although such tools have already been developed, they can refactor only those places of the program that are identified by the programmer, and can only perform simple refactorings, such as detecting declared but unused variables. However, the programmer usually faces the contrary problem: the tedious task is not performing a refactoring but detecting a bad smell, and no tools have been developed for that purpose. Major advances have been made in applying logic programming to this problem [16]. This paper offers a model for creating a program that could detect refactorings automatically and serve as a universal refactoring tool usable by programmers in practice; the paper also presents detailed examples of its usage.

2 Refactoring Tools

There is a diversity of refactoring tools available, but in practice they usually do not meet the requirements stated below: they can only detect places in need of simple refactorings, such as declared but unused variables, and do not locate the places that need a more complex refactoring. Another disadvantage of such tools is that the user is either unable to develop new refactorings for them, or doing so is an overly complex task requiring specific knowledge and comprehension of the tool's structure; in addition, such tools are developed for programs in a specific programming language, and rewriting them for another language can be complex and expensive. While scientific studies [3, 4, 7, 10, 11, 12, 14, 15] present various ways of automating refactoring and offer theoretical models for performing a wider range of refactorings, such as duplicate code detection using anti-unification [1], it is difficult to apply them in practice since each new refactoring requires the creation of a complex model. 
Reference [17] presents three distinct steps in the refactoring process: 1) detect when an application should be refactored, 2) identify which refactoring(s) should be applied, and 3) (automatically) perform these refactorings. The first step depends on the programmer, while the second and third can be automated in a tool. The model of the tool presented in this paper must meet additional requirements which are useful for programmers:
1. Automatic detection of possible refactorings. This is the most important requirement: the programmer should not have to look for the places that do not satisfy the requirements, so the tool is expected to detect the parts which need to be refactored.
2. A possibility to expand the number of refactorings in the tool in a simple way. Important refactorings arise during the development of specific systems, or when developing in a specific way, i.e. each

team of programmers may need to define their own refactorings instead of hoping that the suppliers of the tool will update the refactorings it supports.
3. Easy adaptation of the tool to various programming languages; a refactoring written for one language should be easily modifiable for another language.

Logic meta-programming is used to detect the places in need of refactoring, since a logic language can be used to write declarative programs which describe the examined programs and manipulate them. The practical application of this approach is demonstrated in various scientific studies [2, 16, 17, 18, 19], which include experiments with the Smalltalk and Java languages; however, they do not fulfil the goals stated above. This paper aims at expanding their ideas and presents a model of a universal tool which serves the three goals defined above. The presented tool is tied only to logic programming and can be integrated with any language in which the analyzed programs are written. Furthermore, the paper analyzes the realization possibilities of such a tool and presents examples that examine the solutions of several important problems which reduce the quality of programs.

3 Language of Refactoring

The language of refactoring is used not only to write the algorithms for detecting possible refactorings, but also to define the refactored program, since language-independent refactorings can only be implemented through an intermediate form into which programs in different languages can be translated. If this intermediate language is uniform for all languages, it can be called a universal language. In this case refactorings are searched for not in the primary application but in the intermediate form which holds the refactorings data. The language of refactoring is thus used for defining the refactorings data, writing the refactoring programs, and obtaining the results. 
The Prolog logic programming language was selected as the refactoring language.

3.1 Refactorings Data

The refactorings data is the information about the analyzed program: for example, the names of the classes and methods, the relationships between them, etc. The refactorings data are obtained by transforming the refactored programs, acquiring the information necessary to perform a desired refactoring. The refactorings data are defined by Prolog facts. Theoretically, the entire program could be rewritten as facts, as shown in Table 1; in practice, depending on the refactoring, only certain data have to be specified.

Table 1. Rewriting the program by facts [18]

Object-oriented program:

class Stack {
  int pos = 0;
  public Object peek() { return contents[pos]; }
  public Object pop() { return contents[--pos]; }
}

Logic program facts:

class( 'Stack' ).
var( 'Stack', int, pos ).
...
method( 'Stack', 'Object', peek, [], {return contents[pos]} ).
method( 'Stack', 'Object', pop, [], {return contents[--pos]} ).
...

During the transformation of the application it is important to pass over not only the data about the application's structure, but also the relationships within that structure: the data should form a large connected graph with clear connections. This example shows how the required information about the application structure (classes, methods, variables) and relationships (to which class a method or variable belongs) can be transferred. Any object-oriented language can be rewritten into such facts, since they are not tied to specific keywords but to the language features (class, interface, method, variable). A different list of facts and refactorings could be created for the structured programming languages (procedures, functions) and for other groups of programming languages, since the refactorings depend on these groups: certain refactorings which are important for object-oriented programs may not apply to structured programs, and vice versa. 
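Since the fact base is just structured data, the same idea can be sketched outside Prolog. The following hypothetical Python analogue stores Table 1-style facts as tuples and queries them; the names follow the Stack example, while the query helper and its wildcard convention are invented for illustration.

```python
# A Python analogue (not the paper's Prolog) of the facts in Table 1:
# program structure stored as tuples that are queried like a fact base.

facts = [
    ("class", "Stack"),
    ("var", "Stack", "int", "pos"),
    ("method", "Stack", "Object", "peek"),
    ("method", "Stack", "Object", "pop"),
]

def query(kind, *pattern):
    """Yield facts of the given kind matching the pattern (None = wildcard)."""
    for f in facts:
        if f[0] == kind and all(p is None or p == v
                                for p, v in zip(pattern, f[1:])):
            yield f

# All methods of class Stack, regardless of return type or name:
methods = list(query("method", "Stack", None, None))
print(len(methods))
```

Unification in Prolog plays the role of the wildcard matching here, and backtracking replaces the explicit loop.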
The standard refactorings data about the applications are shown in Table 2.

Table 2. The examples of the data used in refactoring operations [13]

Representational mapping predicate   Description
class(C)                             C must be a class
subclass(P, C)                       class C must be a direct subclass of class P
concreteSubclass(P, C)               class C must be a concrete subclass of class P
abstractMethod(C, M)                 M must be an abstract method of class C
concreteMethod(C, M, B)              M must be a concrete method with body B in class C

classMethod(C, M, B)                 M must be a class method with body B in class C
instanceVariable(C, V)               V must be an instance variable of class C
objectCreationBody(M, B, C)          body B of method M must create an instance of C

These data contain the primary information about the program, which could not be obtained by inference. By further manipulating them it is possible to derive various information about the application itself, for instance to determine whether a class contains at least one method, the number of subclasses of a class, etc. By combining these data it is possible to define a refactoring program which, while analyzing the data, reports whether the program needs to be refactored.

3.2 Refactorings Programs

The refactorings programs are defined by means of logical rules. It is rather easy to write a refactoring program in a logic language: you simply need to state a syntactically correct task. For example, consider the following refactoring rule: a class needs to be split into two classes (abstract and concrete) if the number of its abstract methods divided by the total number of its methods is less than a half and does not equal zero (a ratio of zero would mean the class has no abstract methods). After formulating the task in this way, it can be implemented almost word by word; the resulting refactoring program, written with Prolog facts and rules, is shown in Table 3.

Table 3. Classes that are in need of refactoring

Refactoring data (Prolog facts):

class(a).  class(b).
method(a, x, true).  method(a, y, false).  method(a, w, false).  method(a, x, false).
method(b, xx, true). method(b, yy, true). method(b, ww, true). method(b, xx, false).

Refactoring program (Prolog rules):

abstractratio(C, N) :-
    class(C),
    findall(C, method(C, _, true), List1),
    countlist(List1, A),
    findall(C, method(C, _, _), List2),
    countlist(List2, B),
    N is A / B,
    N < 0.5,
    N =\= 0.

Refactoring result (Prolog answer):

?- abstractratio(Class, Ratio).
Class = a,
Ratio = 0.25 ;
false.

Explanation: the answer reports class a, which has fewer than half abstract methods (ratio 0.25), so it needs to be split into two classes that share its abstract and simple methods. Class b, with a ratio of 0.75, has more than half abstract methods, so it is not reported and can remain unchanged. The program applies a standard Prolog predicate, findall, which returns the list of all facts satisfying the goal. The rule used for counting list members can be implemented as follows:

countlist([], 0).
countlist([_|Tail], N) :- countlist(Tail, N1), N is N1 + 1.

This provides a simple way to implement a refactoring which automatically detects all classes that do not meet the abstractness requirement, if such a requirement is raised. To achieve this, only two types of data are needed: data about the classes and about the abstractness of their methods. A standard base of facts used for refactorings can be created first and constantly supplemented while developing new refactoring programs; in this way the creation of new refactoring programs becomes increasingly easy, since a large database will be developed.

3.3 Refactorings Results

The results of a refactoring program have to satisfy the refactoring task. A refactoring can be written either to inform the programmer about the locations where refactoring has to be performed, or to enable the refactoring tool not only to detect the places that need to be refactored but also to refactor them by rearranging the source program. In either case the result of the program has to be interpreted in the same way. Usually the refactoring tool detects the places that need refactoring and the programmer takes care of performing the operation. For example, the refactoring program can detect all classes that are overly large (according to a

certain number of statements), but it will not be able to split them. However, the tool is able not only to detect unused methods or variables, but also to delete them.

4 Universal Refactoring Tool

The defined language of refactoring still lacks a way to obtain the data from the refactored program and a means to interpret the results. The part of the tool that obtains from the program the data needed for performing the desired refactorings will be called the data generator; the part responsible for interpreting the results will be called the results interpreter. The model of the tool is presented in Figure 1.

Figure 1. The model of the Universal Refactoring Tool

These parts of the tool have to be programmed in the same programming language as the examined program, since it is impossible to write one universal program that could obtain information about programs written in any language. This does not conflict with the stated universality goal, since what matters is that the refactorings programs themselves are language-independent and written in a logic programming language.

4.1 Data Generator

The data generator is a program that analyzes the refactored program and extracts the information needed to perform the refactorings. The majority of programming languages (C#, Java, PHP, etc.) have Reflection libraries that provide data describing a program's structure. For example, methods for extracting the class(Name) fact are presented in Table 4.

Table 4. Extraction of the class(Name) fact

C# method:

using System.Reflection;
...
public void AllClass(Assembly assembly) {
    Type[] types = assembly.GetTypes();
    foreach (Type type in types) {
        Console.WriteLine("class(" + type.FullName + ")");
    }
}

Java method:

import java.lang.reflect.*;
...
public void AllClass(Class[] classes) {
    for (Class c : classes)
        System.out.println("class(" + c.getName() + ")");
}
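For comparison, the same data-generator idea can be sketched in Python, where the standard inspect module plays the role of the C#/Java Reflection libraries. The toy classes and the emitted fact format are illustrative only.

```python
# Sketch of a class(Name) data generator in Python: the inspect module
# provides the reflection facilities used by the generators in Table 4.

import inspect

class Stack: pass        # toy "module contents" for illustration
class Queue: pass

def class_facts(namespace):
    """Emit a class(Name) fact for every class found in the given namespace."""
    return ["class(%s)" % name
            for name, obj in namespace.items()
            if inspect.isclass(obj)]

facts = class_facts({"Stack": Stack, "Queue": Queue, "x": 1})
print(facts)
```

In a real generator the namespace would come from the loaded modules of the refactored program, and further facts (methods, variables, relationships) would be emitted the same way.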

This part of the tool creates all the data necessary to perform the refactorings and can later be adapted to any other program written in that programming language.

4.2 Results Interpreter

The results interpreter is a program that performs operations with the refactoring results. This program is designed separately for each programming language if there is a need to make rearrangements in the source program; if it is sufficient to show the refactoring results to the programmer, a universal program can be developed that transforms the results of the Prolog rules into a form acceptable to the programmer. There are thus two ways the results interpreter can handle results such as those presented in Table 3: write out an explanation of the results, or implement the modifications within the program. If the modifications are to be performed automatically, the most convenient implementation would be integrated development environment (IDE) plug-ins. There is no need to create a new program, since it is possible to use the existing refactoring tools, which cannot detect the flaws in a program but can perform a refactoring identified by the programmer; here the programmer is informed instead by the result of the logic refactoring program. This implementation is the most challenging part of the tool. In most cases it is sufficient to provide an informative answer to the programmer, identifying the place which needs to be refactored and even suggesting how to perform the operation, while leaving the programmer to carry out the refactoring itself. The following chapter presents an example of a refactoring, with a detailed analysis of what primary data have to be collected, how to write the refactoring program, and how to understand the obtained results.

5 Acyclic Dependencies Refactoring

The term package is used in the Java language. 
In this language a package collects a logical grouping of declarations that can be imported into other programs. In Java, for example, one can write several classes and add them to one package; other Java programs can then use that package and gain access to those classes [8]. In the Microsoft .NET Framework a package corresponds to a library assembly. The relationships between the packages provide very important information about the system. The design of a system usually starts from the package relationships and then proceeds down to the specification of classes and their relationships. A well-designed project contains packages that are connected as a directed acyclic graph. The example analyzed in this chapter is shown in Figure 2.

Figure 2. A directed acyclic graph [8]

Given such a hierarchy, each package can be changed safely, since it is clearly known which packages depend upon it and which do not. For example, if the package MyDialogs is changed, it is clear that the packages MyTasks and MyApplication will also need to be checked; all other packages are unaffected by whether MyDialogs is changed, because they have no relationship with it. However, if one class of the MyDialogs package needs to be changed so that it uses a method of the MyApplication class, the relationships between the packages become interdependent, as shown in Figure 3. 

Figure 3. Package Diagram with Cycles [8]

This modification creates a cycle between the packages, and such cycles cause major problems: a change to a package within the cycle can lead to unexpected consequences if the programmer is not aware of the cycle. When there is a cycle, changes must be made to all of its packages whenever one package needs to be changed; it can be said that the packages within the cycle turn into one large package. It is therefore important to avoid such cycles, and this can be achieved easily: the method which is used within the cycle is moved to an abstract class, this class is inherited, and the class within the cycle simply establishes a relationship with the abstract class, which belongs to the same package. To solve this problem a refactoring task can be formulated: detect the package cycles and determine in which of the packages the cycle needs to be broken. This can be done by measuring the stability of the packages [9]:

I = Ce / (Ca + Ce),

where Ce is the number of classes inside the package that depend upon outside packages, and Ca is the number of classes outside the package that depend upon classes within it. When the I metric is 0, the package is maximally stable, since it does not depend on outside classes. When the I metric is 1, the package is maximally unstable: no other package depends upon it, while it calls other packages. The more stable a package is, the more abstract classes it should contain; ideally a package with I equal to 0 contains nothing but abstract classes. This metric can be used when eliminating the cycles between packages: the most stable package must be found, and an abstract class introduced in it from which to inherit, providing the methods needed by the class that calls into the cycle. In this way the cycle is eliminated. 
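Before the Prolog version below, the cycle search and the I metric can be sketched in Python. This is a hypothetical analogue: it counts package-level dependencies rather than classes, so its I values differ from the class-based figures in the paper's data, and the graph is the cyclic package diagram of Figure 3.

```python
# A Python analogue (not the paper's Prolog) of cycle detection and the
# I = Ce / (Ca + Ce) stability metric, on the package graph of Figure 3.

deps = {                          # package -> packages it uses
    "myapp":   ["taskwin", "mytask"],
    "taskwin": ["win"],
    "mytask":  ["mydial"],
    "mydial":  ["win", "myapp"],  # the edge that closes the cycle
    "win":     [],
}

def find_cycle(start):
    """Depth-first search for a dependency path returning to the start package."""
    stack = [(start, [])]
    while stack:
        node, path = stack.pop()
        for nxt in deps.get(node, []):
            if nxt == start:
                return path + [node, nxt]
            if nxt not in path:
                stack.append((nxt, path + [node]))
    return None

def instability(pkg):
    ce = len(deps[pkg])                                 # outgoing dependencies
    ca = sum(pkg in used for used in deps.values())     # incoming dependencies
    return ce / (ca + ce)

print(find_cycle("myapp"))
print(instability("win"))     # 0.0: win depends on nothing, maximally stable
```

The cycle found (myapp, mytask, mydial, back to myapp) matches Figure 3, and win's instability of 0 identifies it as the stable package that should hold abstractions.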
Having formulated the refactoring task, we now need to create the refactoring data. First, all packages and the relationships between them are defined:

package(myapp).    % MyApplication
package(mytask).   % MyTasks
package(mydial).   % MyDialogs
package(taskwin).  % TaskWindow
package(win).      % Windows

packageused(myapp,   [taskwin, mytask]).
packageused(taskwin, [win]).
packageused(mytask,  [mydial]).
packageused(mydial,  [win, myapp]).
packageused(win,     []).

It is not difficult to obtain these data from the program being refactored; therefore, it would be easy to write the data generator for a universal tool. The most popular object-oriented languages offer reflection libraries that provide such information. These two kinds of facts are sufficient to perform this refactoring. The refactoring program cycles(Package, Reference) looks like this:

cycles(X, P) :- cycles(X, X, P, []).
cycles(X, _, [], A) :- member(X, A), !.
cycles(X, Y, P, A) :-
    packageused(Y, ArcList),
    member(Z, ArcList),
    not(member(Z, A)),
    P = [Z|PTail],
    cycles(X, Z, PTail, [Z|A]).

In order to use the metric that calculates the stability of the packages, additional facts with the coupling counts (Package, Ca, Ce) are needed:

packageused(myapp, 3, 1).

packageused(taskwin, 1, 1).
packageused(mytask, 1, 2).
packageused(mydial, 2, 1).
packageused(win, 0, 2).

An additional rule calculates the metric value:

packagestability(P, I) :- packageused(P, Ca, Ce), I is Ce / (Ca + Ce).

The main refactoring program is supplemented accordingly:

cycles(X, P, I) :- cycles(X, X, P, []), packagestability(X, I).

Now the programmer knows for which package an abstract class needs to be created in order to eliminate the cycle. This refactoring program can be further extended by incorporating into the refactoring data not only the package relationships but also which classes call which methods. With these data the refactoring can be expanded so that, after detecting the package to be refactored, it also specifies which methods need to be implemented after inheriting the abstract classes, in other words, which methods within this package are used by the other packages that create the cycle. The information required for this expansion is presented as the following facts:

packageref(myapp, app, calc, mydial).
packageref(mytask, task, list, myapp).
packageref(mytask, task, count, myapp).
packageref(mydial, dial, div, mytask).
packageref(taskwin, win, form, myapp).
packageref(win, window, visible, taskwin).
packageref(win, window, enable, mydial).

Thus, having a program that detects cycles and having the data about the relationships, it is possible to write a program that collects all the methods which can be moved to an abstract class, as demonstrated in Table 5.

Table 5. Refactoring result
Refactoring program:
abstract(P, C, M, K, I) :- cycles(P, L, I), packageref(P, C, M, K), member(K, L).
Refactoring result:
?- abstract(PFrom, Class, Method, PTo, I).
PFrom = myapp, Class = app, Method = calc, PTo = mydial, I = 0.25 ;
PFrom = mytask, Class = task, Method = list, PTo = myapp, I = 0.6666666666666666 ;
PFrom = mytask, Class = task, Method = count, PTo = myapp, I = 0.6666666666666666 ;
PFrom = mydial, Class = dial, Method = div, PTo = mytask, I = 0.3333333333333333 ;
false.

This result demonstrates how the cycle emerges. It is advisable to break the cycle by moving the method that resides in the package with the smallest stability ratio to an abstract class. This information is sufficient to create the results interpreter. It is known which method needs to be moved to the abstract class, and also in which package the relationship must be changed from the former class to the abstract class so that only the needed methods are used, thereby avoiding the cycle. Such a results interpreter would be able not only to inform the programmer about the presence of cycles but also to eliminate them automatically. This example demonstrates a simple way to solve a complex problem which can severely reduce the quality of programs. The refactoring program is not tied to the Java language; the same program could support the .NET Framework languages. This could be achieved by rewriting the data generator and the results interpreter.

6 Conclusions

This paper presents a model for performing automated detection of bad smells within programs and offers a tool that helps apply this model in practice. The model meets the stated requirements: it automatically detects bad smells, allows simple creation of new refactorings, and implements these refactorings regardless of the programming language used. Another advantage of creating refactorings by means of logic programming is that many programmers are familiar with it and it is easy to learn: it is sufficient to understand the refactoring logic and then to write the program without any difficulties. It is possible to create a universal refactoring tool that would assist in implementing the refactoring programs.
The utilization of such a tool would be very convenient and easy, and would not require special

resources from the programmer. It would only require writing a data generator that would analyze the programs and select the information necessary for performing refactoring. The functions of the results interpreter could be limited to the presentation of information without modifying anything, thus allowing the programmer to decide how to change the program in order to eliminate the problem. Having a wide range of refactoring programs, the programmer is able to perform all the implemented refactorings that can be adapted to the analyzed system of programs. The programmer only needs to decide whether he wants to refactor the programs according to the obtained result.

References
[1] Bulychev P., Minea M. Duplicate code detection using anti-unification. SYRCoSE.
[2] Bravo F. M. A Logic Meta-Programming Framework for Supporting the Refactoring Process. EMOOSE.
[3] Demeyer S., Ducasse S., Nierstrasz O. Finding Refactorings via Change Metrics. ACM Press.
[4] Erni K., Lewerentz C. Applying Design-Metrics to Object-Oriented Frameworks. Third International Software Metrics Symposium.
[5] Fowler M. Refactoring: Improving the Design of Existing Code. Addison-Wesley.
[6] Fowler M. Refactorings in Alphabetical Order.
[7] Goldstein M., Feldman Y. A., Tyszberowicz S. Refactoring with Contracts. AGILE 2006 Conference.
[8] Martin R. C. Granularity. C++ Report. 1996, volume 8, number 10.
[9] Martin R. C. Large Scale Stability. C++ Report. 1997, volume 9, number 2.
[10] Meffert K. Supporting Design Patterns with Annotations. 13th Annual IEEE International Symposium.
[11] Mens T., Van Eetvelde N., Demeyer S., Janssens D. Formalizing Refactorings with Graph Transformations. Wiley.
[12] Mens T. On the Use of Graph Transformations for Model Refactoring. GTTSE.
[13] Mens T., Tourwe T. A Declarative Evolution Framework for Object-Oriented Design Patterns. ICSM.
[14] Mens T., Taentzer G., Runge O. Analysing Refactoring Dependencies Using Graph Transformation.
Software and System Modeling. 2007, volume 6, number 3.
[15] Simon F., Steinbrückner F., Lewerentz C. Metrics Based Refactoring. IEEE Computer Society.
[16] Tourwe T., Brichau J., Mens T. Using Declarative Metaprogramming To Detect Possible Refactorings. ASE.
[17] Tourwe T., Mens T. Identifying Refactoring Opportunities Using Logic Meta Programming. CSMR.
[18] Tourwe T., De Volder K., Brichau J. Logic Meta Programming as a Tool for Separation of Concerns. ECOOP.
[19] Wuyts R., Ducasse S. Symbiotic Reflection between an Object-Oriented and a Logic Programming Language. ECOOP.

TRANSITION FAULT TEST GENERATION FOR NON-SCAN SEQUENTIAL CIRCUITS AT FUNCTIONAL LEVEL

Eduardas Bareisa 1, Vacius Jusas 1, Kestutis Motiejunas 1, Rimantas Seinauskas 1,2
1 Kaunas University of Technology, Department of Software Engineering, Studentu st., Kaunas, Lithuania, eduardas.bareisa@ktu.lt, vacius.jusas@ktu.lt, kestutis.motiejunas@ktu.lt
2 Kaunas University of Technology, Information Technology Development Institute, Studentu 48A, Kaunas, Lithuania, rimantas.seinauskas@ktu.lt

Abstract. The paper presents two functional fault models devoted to functional delay test generation for non-scan synchronous sequential circuits. The sequential circuit is represented as an iterative logic array model consisting of k copies of the combinational logic of the circuit. The value k defines the length of the clock sequence. A method that allows determining the length of the clock sequence is presented. The experimental results demonstrate the superiority of the delay test patterns generated at the functional level using the introduced functional fault models over the transition test patterns obtained at the gate level by a deterministic test pattern generator. The functional delay test generation method is especially valuable for circuits where long test sequences are needed in order to detect transition faults.

Keywords: sequential non-scan circuit, transition fault test, iterative logic array, functional level.

1 Introduction

Transition fault testing of sequential circuits has mostly been considered assuming scan design, which allows a circuit to be tested similarly to a combinational one. Two test vectors, namely v1 and v2, are applied to detect transition faults. The primary scan-based test techniques are enhanced scan [8], functional justification, also called broadside test [16], and scan shifting, also called skewed load [11]. All of these techniques use slow and rated clock periods.
A slow clock period is used for the generation and application of vector v1, as well as for the generation of vector v2. The rated clock period is used for the application of vector v2 only. Scan-based testing can apply many sequences that are not possible during the normal operation of the circuit. This leads to over-testing of the circuit, which not only increases the test application time but could also result in loss of yield [15]. Over-testing may become more prominent when transition faults are targeted than when stuck-at faults are targeted [6]. Testing of a delay fault in a non-scan sequential circuit requires more than two vectors. Two methodologies can be applied: variable clock [9] and rated clock [5]. In the variable-clock non-scan sequential test methodology, the vector pair should be like the one used in the scan-based test methodology. But the vector v1 has to be generated by a set of vectors starting at some initial state. This set is called a justification sequence. If the destination of the path is a flip-flop, then the state should be propagated to some primary outputs. This part of the test is called a propagation sequence. The slow clock is used for the justification and propagation sequences. Thus, only one vector, v2, in the entire test sequence uses the rated clock. The rated-clock non-scan sequential test is the most natural form of test. All the vectors, either functional or generated to cover any type of faults, are applied at the rated clock. A variable-clock test is always possible for a fault that is testable by a rated-clock test [9]. However, some variable-clock tests may cover paths that are impossible to activate in normal rated-clock operation. Under scan-based tests, transition faults are associated with an extra delay that is large enough to cause the delay of any path through the fault site to exceed the clock period [4]. Beyond this assumption, the specific delay size is not important.
When non-scan test sequences are applied at speed, a faulty line must be considered over multiple consecutive fast clock cycles. In this case, it becomes necessary to consider fault sizes measured in numbers of clock periods in order to determine the value of the faulty line. In the transition fault model introduced in [7], each transition fault in the combinational logic of the circuit defines several faults with different extra delays. We refer to a transition fault with a given extra delay of n clock periods as an n-transition fault. An alternative model to that of [7], called the unspecified transition fault, was introduced in [12]. This model attempts to encompass all possible sizes of a transition fault in one fault. Under an unspecified transition fault, an unspecified value is introduced at the fault site in the faulty circuit when the fault is activated or when a fault effect is propagated from a previous time unit. Fault detection potentially occurs when an unspecified value reaches a primary output. But the simulation of unspecified values using three-valued logic has an inherent loss of accuracy [12].
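The loss of accuracy in three-valued simulation can be illustrated with a small sketch (a hypothetical Python illustration of the general idea, not the machinery of [12]): an unspecified value X propagates through gates pessimistically, so an output whose real value is fixed may still be reported as X.

```python
# Three-valued gate evaluation over {0, 1, 'X'} (a minimal sketch).
def t_and(a, b):
    if a == 0 or b == 0:
        return 0                      # a controlling 0 decides the AND
    if a == 1 and b == 1:
        return 1
    return "X"                        # otherwise the value is unknown

def t_or(a, b):
    if a == 1 or b == 1:
        return 1                      # a controlling 1 decides the OR
    if a == 0 and b == 0:
        return 0
    return "X"

def t_not(a):
    return "X" if a == "X" else 1 - a

# Pessimism: y = a OR (NOT a) equals 1 for any a in {0, 1},
# but with an unspecified a the simulation can only report X.
a = "X"
y = t_or(a, t_not(a))
print(y)  # prints X although the real value is always 1
```

This is exactly the inherent inaccuracy mentioned above: the unknown at the fault site masks reconvergent correlations that a symbolic analysis would resolve.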

Experimental results reported in [7] and [13] indicate that one-transition faults are the hardest to detect. Moreover, tests for one-transition faults can detect most of the n-transition faults for n > 1. Therefore, it is possible to conclude that there is no need to construct transition tests for n-transition faults with n > 1. This conclusion is supported by a new model of transition faults introduced in [14]. The model, called the double-single stuck-at fault, requires the activation of single stuck-at faults with opposite stuck-at values on the same line at consecutive time units. In addition, it requires the detection of both faults (as single faults) at the same or later time units. The transition fault test for non-scan sequential circuits can also be constructed at the functional level using the software prototype model [3]. Bareiša et al. [3] introduced three different fault models: the functional clock at-speed (FCaS), the functional clock static-based (FCS), and the functional clock delay (FCD). According to these models, the functional faults are considered on the primary inputs and primary outputs of the model only. The number of the functional faults is independent of the length of the clock sequence. Our later experience showed that these models do not achieve good-quality results for transition test generation. Therefore, in this paper, we improve the FCaS fault model so that it takes advantage of the iterative logic array. The object of the paper is to present the delay fault test generation approach using the software prototype model. The rest of the paper is organized as follows. We present the functional fault models in Section 2. We introduce the test generation approach in Section 3. We report the results of the experiment in Section 4. We finish with conclusions in Section 5.
2 Fault model

A synchronous sequential circuit can be transformed into an iterative logic array [10]. The iterative logic array model of a synchronous sequential circuit consists of duplicated copies of the combinational logic of the circuit, called time frames, as shown in Figure 1. The iterative logic array model for the circuit is expanded for k time frames. The vertical inputs of a combinational cell are primary inputs and the vertical outputs are primary outputs of the sequential circuit; the horizontal inputs are the present state bits and the horizontal outputs are the next state bits.

Figure 1. The iterative logic array model

We call the number of cells (time frames) of the iterative logic array the period of the clock sequence. In such a model, the number of primary inputs is multiplied by k, the number of primary outputs is multiplied by k, the number of previous-state bits, which are considered as primary inputs, is multiplied by k, and the number of next-state bits, which are considered as primary outputs, is multiplied by k. We obtain a model of the sequential circuit that is expanded quite a lot, but all the control of the model is included in its interface. Let one generic cell of the iterative logic array model have a set of primary inputs X = {x_1, ..., x_i, ..., x_n}, a set of primary outputs Y = {y_1, ..., y_j, ..., y_m}, a set of bits of previous state Q = {q_1, ..., q_j, ..., q_v}, and a set of bits of next state P = {p_1, ..., p_j, ..., p_v}. The number v is the same for the bits of previous and next states. Therefore, the input stimulus has n+v signal values, and the output stimulus has m+v signal values. We do not relate the inputs and outputs to the time frame, but we behave quite differently when we consider the input stimulus and the output responses. We denote the complete input stimulus of the cell of time frame t by S^t = <s^t_1, ..., s^t_i, ..., s^t_(n+v)>.
The complete output response captured on the outputs of the cell of time frame t is R^t = <r^t_1, ..., r^t_j, ..., r^t_(m+v)>. When we refer to the input stimulus of the whole iterative logic array, we do not use the upper index t. We define the functional faults for one generic cell, but they will be applied to every cell in the iterative logic array model. The functional faults are separated into two groups: primary and secondary. A definition similar to the description presented in [2] is introduced.

Definition 1. The primary functional fault is a tuple of stuck-at faults (x_i^f, y_j^h), f = 0,1, h = 0,1.

Definition 2. The secondary functional fault is a tuple of stuck-at faults (x_i^f, p_j^h), f = 0,1, h = 0,1.

Now we are concerned with how to use these functional fault models for the detection of transition faults. Recall the description of the detectability of the functional fault [2], which we present here as a definition.

Definition 3. The functional fault (x_i^f, y_j^h) is detected by test stimulus S under the following conditions:

1. The test stimulus S detects the single fault x_i stuck-at f.
2. The fault-free value of output y_j under S is ¬h (the complement of h).
3. In the presence of x_i stuck-at f, the value of output y_j is h.

Such a definition is valid for the detection of stuck-at faults. In order to adapt Definition 3 for the detection of delay faults in the iterative logic array model, we have to take into account the following peculiarities:
1. The iterative logic array model consists of k cells, whereas the functional faults described in [2] were used for a single combinational cell.
2. The functional faults are defined for one generic cell.
3. The fault effect can start at the inputs of cell t and can be observed at the outputs of the same cell t or at the outputs of cells located further in the chain of cells.
4. The stuck-at faults can be injected at the inputs of all the cells and the responses can be observed at the outputs of all the cells.
5. The bits of previous and next state are not real primary inputs and outputs.
6. The delay fault has to be detected. In order to detect the delay fault, a transition has to start at the fault site.

Bearing in mind the enumerated peculiarities, we introduce the following definition that names the necessary conditions for the detection of transition faults using the model of the primary functional fault.

Definition 4. The primary functional fault (x_i^f, y_j^h) is detected by test stimulus S under the following conditions:
1. The test stimulus S detects the single fault x_i stuck-at f on the input of cell t.
2. The fault-free value under S at the output y_j of cell t or of cells t+1, t+2, ..., k is ¬h.
3. In the presence of x_i stuck-at f on the input of cell t, the value at the output y_j of cell t or of cells t+1, t+2, ..., k is h.
4. The fault-free value under S at the input x_i of cell t-1 is f.

The last condition of Definition 4 guarantees that the transition starts at the input x_i of cell t.
The first three conditions ensure that a sensitized path exists between the input x_i of cell t and the output y_j, which can be an output of one of the cells t, t+1, ..., k. One could think that the use of the primary functional fault for the detection of transition faults is sufficient. But usually a primary input and a primary output are connected by a number of different paths [1]. Therefore, the functional faults have to be defined in such a way as to allow sensitizing as many of the different paths of the circuit as possible. For this purpose, the secondary functional fault was defined. But the secondary functional fault does not relate the primary input to the primary output. Consequently, it alone cannot ensure the propagation of the fault effect from the primary input to the primary output. An additional functional fault has to be linked into a chain with the secondary functional fault. Now we can formulate a definition that determines the necessary conditions for the detection of transition faults using the model of the secondary functional fault.

Definition 5. The secondary functional fault (x_i^f, p_j^h) is detected by test stimulus S under the following conditions:
1. The functional fault satisfies the conditions of Definition 4 and is detected at the output p_j of cell t.
2. The functional fault (q_i^f, y_j^h), where q_i denotes the input of cell t+1 directly connected to the output p_j of cell t, and p_j^h = q_i^f, has to be detected according to the conditions of Definition 4, except the fourth condition.

The delay test generation using the secondary functional fault allows sensitizing the paths connecting every bit of state to the primary outputs. The detection of the functional delay faults can be represented by the detection matrix D = ||d_(a,b)|| of size 2n × 2(m+v), where index a denotes the inputs of the cell and index b denotes the outputs of the cell.
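The bookkeeping behind D, associating each input/output pair and its pair of stuck-at values with one matrix entry, can be sketched in Python; the toy sizes n, m, v, the helper name mark_detected, and the marked faults below are hypothetical, introduced only for illustration:

```python
n, m, v = 3, 2, 2                      # primary inputs, primary outputs, state bits (toy sizes)
rows, cols = 2 * n, 2 * (m + v)
D = [[0] * cols for _ in range(rows)]  # detection matrix, initially all faults undetected

def mark_detected(i, f, j, h):
    """Mark functional delay fault (x_i^f, y_j^h) or (x_i^f, p_j^h) as detected.
    i ranges over 1..n; j over 1..m for primary faults and m+1..m+v for
    secondary faults; f and h are the stuck-at values 0/1."""
    a = 2 * i - (1 - f)                # row 2i-1 for f = 0, row 2i for f = 1
    b = 2 * j - (1 - h)                # column 2j-1 for h = 0, column 2j for h = 1
    D[a - 1][b - 1] = 1                # stored with 0-based indexing

mark_detected(1, 0, 1, 0)              # primary fault (x_1^0, y_1^0) -> d_(1,1)
mark_detected(2, 1, 3, 1)              # secondary fault (x_2^1, p_1^1) -> d_(4,6)
print(D[0][0], D[3][5])                # prints 1 1
```

The fault simulator only has to call such a marking routine whenever the conditions of Definition 4 or 5 are met; coverage is then the fraction of nonzero entries.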
The bits of previous state are not represented in the matrix, because the corresponding functional faults are not considered. The entry d_(a,b) := 1 if the corresponding functional delay fault is detected, and d_(a,b) := 0 otherwise. Each input/output pair (i, j) is associated with four entries of the matrix, d_(2i-1,2j-1), d_(2i-1,2j), d_(2i,2j-1), d_(2i,2j), which correspond to the primary functional delay faults (x_i^0, y_j^0), (x_i^0, y_j^1), (x_i^1, y_j^0), (x_i^1, y_j^1) when i = 1, ..., n and j = 1, ..., m; the secondary functional faults are represented by the pairs (x_i^0, p_j^0), (x_i^0, p_j^1), (x_i^1, p_j^0), (x_i^1, p_j^1) when i = 1, ..., n and j = m+1, ..., m+v. The entry d_(2i-1,2j-1) is set to 1 if the primary functional delay fault (x_i^0, y_j^0) is detected. That corresponds to the situation where the transition 0→1 is on the input i, the transition 0→1 is on the output j, and the blocked transition on the input disables the transition on the output. The entry d_(2i-1,2j) is set to 1 if the primary functional delay fault (x_i^0, y_j^1) is detected. That corresponds to the situation where the transition 0→1 is on the input i, the transition 1→0 is on the output j, and the blocked transition on the input disables the transition on the output. The entry d_(2i,2j-1) is set to 1 if the primary functional delay fault (x_i^1, y_j^0) is detected. That corresponds to the situation where the transition 1→0 is on the input i, the transition 0→1 is on the output j, and the blocked transition on the input disables the transition on the output. The entry d_(2i,2j) is set to 1 if the primary functional delay fault (x_i^1, y_j^1) is

detected. That corresponds to the situation where the transition 1→0 is on the input i, the transition 1→0 is on the output j, and the blocked transition on the input disables the transition on the output. In the same way, the detection of the secondary functional faults is labeled when they are detected according to Definition 5.

3 Test generation process

Delay test generation is accomplished at the functional level. The model of the circuit has to be described in high-level description code, which is termed a software prototype. Therefore, it can be presented in the form of a high-level programming language, or of behavioural VHDL or Verilog description code. But the reality is that the models of the circuits are usually available as RTL-level description code, as for example in the ITC'99 benchmark suite. Such models have to be lifted up to the algorithmic level of description. There are several ways to achieve this goal: 1) to write the model in the C programming language; 2) to translate from VHDL or Verilog RTL code to code in the C programming language; 3) to translate from VHDL or Verilog structural code to code in the C programming language. Other alternatives are possible, but we did not consider them. We tried to write the models in the C programming language for all the benchmarks from the ITC'99 benchmark suite. But we did not achieve this goal, because it is practically impossible to ensure the adequacy of all the models. The second way, the most attractive and reliable one, was eliminated as not feasible for two reasons: 1) it is difficult to devise rules that would allow converting several parallel processes into a sequence of operators in the C programming language; 2) such a way contradicts the whole design process, which flows from the algorithmic level to the more detailed RTL level. The third way looks a little strange, but the synthesized structural descriptions are available for all the benchmarks.
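The primitive-by-primitive mapping that such a structural translation performs can be sketched as follows; this is a hypothetical illustration in Python emitting C-style assignments, and the toy netlist is not one of the benchmarks:

```python
# Hypothetical sketch: each structural Verilog primitive instance maps to one
# C-style assignment. Netlist tuples are (primitive, output, inputs...).
C_OP = {
    "and":  lambda ins: " & ".join(ins),
    "or":   lambda ins: " | ".join(ins),
    "nand": lambda ins: "~(" + " & ".join(ins) + ")",
    "nor":  lambda ins: "~(" + " | ".join(ins) + ")",
    "not":  lambda ins: "~" + ins[0],
}

netlist = [                       # toy example
    ("and", "n1", "a", "b"),
    ("not", "n2", "n1"),
    ("or",  "y",  "n2", "c"),
]

def translate(netlist):
    """Emit one C assignment per gate, in netlist (topological) order."""
    lines = []
    for prim, out, *ins in netlist:
        lines.append(f"{out} = {C_OP[prim](ins)};")
    return lines

for line in translate(netlist):
    print(line)                   # n1 = a & b;  n2 = ~n1;  y = n2 | c;
```

Because the mapping is one assignment per gate, the generated C model grows linearly with the gate count, which is exactly the deficiency for large circuits noted below.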
We have written a translator from Verilog structural description code into code in the C programming language. The rules of the translation are very simple, because every Verilog primitive (and, or, not, nand, nor) can be substituted by an appropriate operator of the C programming language. Such a model has only a single deficiency: it is very large for large circuits. Therefore, the productivity of the test generation program suffers quite a lot. Usually the reset and clock signals are present in the RTL-level description code. The values of the reset and clock signals change according to a regular law. Therefore, these inputs have to be excluded from consideration. The values of these inputs, following their regularities, have to be supplied later, when the final test for the sequential circuit is obtained.

Table 1. Parameters of circuits (columns: Circuit; Number of primary inputs; Number of primary outputs; Number of state bits; Number of flip-flops; Number of inputs of fault model; Number of outputs of fault model)

In order to use the introduced fault models, the state bits have to be extracted from the model of the circuit. In high-level description code, the state bits are represented by variables. The declared type of a variable determines the number of bits required for it. But not all the variables represent state; some of them are used only for temporary storage of values. Careful analysis of the code is needed in order to determine which variables are temporary. Synthesis of the code could aid in resolving this problem. Consider an example. Let us use the RTL-level code of circuit b01 from the ITC'99 benchmark suite, presented in the VHDL hardware description language. We find a single declared variable in the code:

variable stato: integer range 7 downto 0;

Our knowledge of the VHDL language allows us to determine that three state bits are required for the variable stato. Let us examine the synthesized description of circuit b01. We find five flip-flops. In order to explain the difference between the number of state bits and the number of flip-flops, we examine the synthesized code of b01. We discover that two additional flip-flops are connected to two primary outputs of the circuit. Such flip-flops form a buffer zone. A buffer zone can be formed on the primary inputs as well. But the flip-flops of the buffer zone have no influence on the length of the clock sequence. Therefore, the initial determination that three state bits are required for circuit b01 was correct. We can say in advance that all the circuits from the benchmark suite ITC'99 have a buffer zone of flip-flops at the primary outputs, except circuit b05. The parameters of the circuits from the benchmark suite ITC'99 are presented in Table 1. Note that the number of inputs of the fault model does not count the reset and clock signals that are present in all the circuits. Analysis of the VHDL code of the benchmarks presented in Table 1 revealed that the code of circuits b04, b05, b08, b12, and b14 has temporary variables. In the code of circuits b07 and b10, we found that some bits of the declared variables are never used. Therefore, the number of state bits according to our calculation is more than the number of flip-flops minus the number of primary outputs. With the problems of the circuit model resolved, we now present the delay test generation process. The delay test generation process consists of two stages: determination of the length of the clock sequence, and test pattern generation. The first stage is very important, because the non-scan sequential circuit is represented by the iterative logic array containing k combinational copies of the sequential circuit.
In other words, k denotes the length of the clock sequence. Every sequential circuit, especially at the algorithmic level, can be represented as a finite state machine. The finite state machine is always synchronized by a clock sequence of some defined length. If the length of the clock sequence is too short, some states will not be visited, and the corresponding delay faults will not be detected. If the length of the clock sequence is too long, some states will be visited repeatedly, but that will not sensitize new paths, and no new faults will be detected. Too long a clock sequence increases the number of test patterns in the test sequence quite substantially, but without necessity. Therefore, the number of copies k directly influences the success of test pattern generation. In order to determine the length of the clock sequence, we use the fault models presented in Section 2. The secondary functional fault model shows the ability to sensitize a state bit. Of course, we understand that all the state bits have to be controllable by the values on the primary inputs. Therefore, the secondary functional fault model serves as the first criterion in choosing the correct length of the clock sequence. We count the number of uncontrollable state bits according to the secondary functional fault model. The goal is for this number to become equal to zero. Increasing the length of the clock sequence allows us to converge to this goal. This goal is not always reachable. Sometimes we increase the length of the clock sequence quite substantially, to several thousand, but some state bits still remain uncontrollable. When we reach the goal, or we see that it is not possible to control all the state bits with an acceptable length of the clock sequence, the primary functional fault model is used as the second criterion in choosing the correct length of the clock sequence. Then we count the number of detected faults according to both criteria. We never know what the number of detectable faults is.
Therefore, the goal is to make the number of detected functional faults as large as possible. We stop increasing the length of the clock sequence when the number of detected faults no longer grows, or grows only very slightly. To start the delay test generation, the circuit is assumed to be initialized to state 0 before the application of the input sequence. If the circuit has a reset input, then it can be set to state 0. If the circuit has no reset input, a synchronizing sequence could be used, which transfers the circuit to a known initial state that may differ from state 0. For example, all the benchmarks from the ITC'99 suite have a reset input, while the benchmarks from the ISCAS'89 suite have none. The problem of generation from an initial state other than state 0 is out of the scope of this paper. When the length of the clock sequence is set, the next step is delay test pattern generation. The generation is implemented according to the iterative logic array model of the circuit and the introduced functional fault models. Random values are generated on the primary inputs. Simulation is carried out for one cell of the iterative logic array at a time. Simulation defines the values on the primary outputs and the values of the next state, which become the previous-state values for the next cell of the iterative logic array. The introduced functional faults are simulated on every cell of the iterative logic array. The number of testable functional faults is unknown. The solution to the problem of stopping the functional delay test generation using the detection matrix is presented in [1].

4 Experimental results

The experiments were carried out on the circuits of the benchmark suite ITC'99. We report the detailed process of the determination of the length of the clock sequence for circuit b01 in Table 2. The length of the clock sequence is directly related to the functioning of the circuit.
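The two-criteria search for k described in Section 3 can be outlined in Python; this is a hypothetical sketch, where uncontrollable_bits and detected_faults stand in for evaluating the iterative logic array model, and the starting value, cap, and threshold are illustrative:

```python
def choose_clock_length(uncontrollable_bits, detected_faults,
                        k0=4, k_max=4096, eps=1):
    """Doubling search for the clock-sequence length k (a sketch).
    uncontrollable_bits(k) and detected_faults(k) are callbacks that
    evaluate the k-frame iterative logic array model."""
    k = k0
    # First criterion: drive the number of uncontrollable state bits
    # (secondary functional faults) to zero, doubling k each time.
    while uncontrollable_bits(k) > 0 and k < k_max:
        k *= 2
    # Second criterion: keep doubling while the number of detected
    # functional faults still grows noticeably.
    while k < k_max and detected_faults(2 * k) - detected_faults(k) > eps:
        k *= 2
    return k

# Toy stand-ins: all bits become controllable from k >= 8,
# and fault detections saturate at k = 16.
k = choose_clock_length(lambda k: max(0, 8 - k),
                        lambda k: min(k, 16) * 10)
print(k)  # prints 16
```

The cap k_max mirrors the observation above that some circuits never reach full state-bit controllability even for lengths in the thousands, so the search must be bounded.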
To make the whole process of determining the length of the clock sequence easier to understand, we present the state transition graph of circuit b01 in Figure 2. The state transition graph has 8 states; the state names are given according to the VHDL model of circuit b01. The circuit has 2 primary inputs (the reset and clock inputs are not counted). Values are shown only on those edges where equal values are required on both inputs. These values indicate that a path traversing the states of the circuit will more likely follow the edges without values, because the generation of values is random. As Table 2 shows, we start the generation with a clock sequence length equal to 4. Knowing the function of the circuit, it was possible to predict that some state bits would be uncontrollable; this value was chosen in order to show that the model allows counting the uncontrollable state bits. Then we double the length of the clock sequence. Doubling is always used in the search for the proper clock sequence length. In this search, we look for the two smallest values that would fit as the proper length. The value 9 was found only because we know the functioning of the circuit. The algorithm indicates the value 15, while intuition suggests the value 9; therefore, the decision was made to generate test subsequences for both lengths of clock sequence. In order to reduce the factor of randomness, the generation was carried out twice. The last three columns show the results of these generations. We have not used the term test subsequence, shown in the fourth column, before: a test subsequence is a sequence of input patterns that corresponds to one period of the clock sequence. Every test subsequence starts with the reset test pattern. The last column reports the results when the reset test pattern was excluded from the last test subsequence. This idea was suggested by knowledge of sequential circuit functioning and was confirmed by the experiments: excluding the reset pattern from the last subsequence yielded higher fault coverage in three cases out of four. As the results show, the intuition was right: the length of the clock sequence should be 9. Table 2.
Determination of the length of clock sequence for b01. (Columns: length of clock sequence; number of uncontrollable state bits; number of detected functional faults; number of test subsequences; fault coverage at gate level, %. The numeric entries are not recoverable from the source.)

Figure 2. The state transition graph of circuit b01 (states a, b, c, e, f, g, wf, wf1).

A similar search for the proper length of the clock sequence was carried out for all the circuits presented in Table 3, but we do not provide the details. The stress is now on functional delay test pattern generation. The transition test patterns were generated at the gate level by the TetraMAX program; the results are reported in the first two columns following the column of circuit names. The next four columns are devoted to the results of functional delay patterns. As in Table 2, the fault coverage is provided twice: the column with the number 2 in its title gives the fault coverage when the reset pattern is excluded from the last test subsequence. In effect, such a procedure makes the clock sequence longer.
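The doubling search for the proper length of the clock sequence, as applied for Table 2, can be sketched as follows. This is a simplified reconstruction of the two-criteria procedure from the text, with hypothetical callbacks `uncontrollable_bits` and `detected_faults` standing in for the fault simulator; the cap and stopping threshold are our assumptions.

```python
def choose_clock_length(uncontrollable_bits, detected_faults,
                        start=4, max_len=4096, eps=0.01):
    """Doubling search for the clock-sequence length.

    First criterion (secondary functional fault model): double the length
    until the number of uncontrollable state bits reaches zero.
    Second criterion (primary functional fault model): keep doubling while
    the number of detected faults still grows noticeably.
    """
    k = start
    while k < max_len and uncontrollable_bits(k) > 0:
        k *= 2                       # drive uncontrollable state bits to zero
    prev = detected_faults(k)
    while k < max_len:
        cur = detected_faults(2 * k)
        if cur <= prev * (1 + eps):  # fault detection stopped growing
            break
        k, prev = 2 * k, cur
    return k
```

With toy callbacks where 16 clock periods suffice to control all state bits and detection saturates at 100 faults, the search settles on a length where further doubling no longer pays off.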

We see that the fault coverage of the functional delay test patterns is larger than the fault coverage of the transition test patterns obtained at the gate level for all the circuits except b01. The results are especially good for circuits b11, b12, b13 and b14, where long test sequences are needed to detect the transition faults. Transition fault test generation at the structural level for non-scan circuits runs into many difficulties. Restrictions were applied during transition fault test generation in order to obtain the test patterns in reasonable time: one of them was the time reserved for generating a test pattern for one fault, which was set to 100 s, while no limit was set for backtracking. Nevertheless, the transition test pattern generation for the circuit b12 took 9 hours; it could have taken more, but the generation was interrupted by the user. The functional delay test generation never exceeded one hour. We should admit that thorough work was needed to select the proper length of the clock sequence, because the process is not automatic yet. The length of the clock sequence is quite large for the circuits b12 and b14, and very large for the circuits b11 and b13. The transition fault coverage of the joint transition and functional delay test patterns is provided in the last column. The joint test patterns obtained a better result only for circuit b10, where the initial fault coverages were almost equal.

Table 3. Functional delay test patterns. (Columns: circuit; transition test patterns at gate level — number, fault coverage (%); functional delay test patterns — length of clock sequence, number of test subsequences, fault coverage 1 (%), fault coverage 2 (%); united fault coverage (%). The per-circuit rows are not recoverable from the source.)

The obtained results of functional test generation can be compared with the results provided in [14], where the circuits b10 and b11 are reported.
Our method of functional delay test pattern generation obtains a better result for the circuit b10 (76.55% in [14]) and loses a little for the circuit b11 (79.13% in [14]).

5 Conclusion

We presented two functional fault models for functional delay test generation for non-scan synchronous sequential circuits, namely the primary functional fault model and the secondary functional fault model. The first model deals with stuck-at faults on the primary inputs and primary outputs; the second deals with stuck-at faults on the primary inputs and the state bits. The circuit is represented by the iterative logic array model, consisting of k copies of the combinational logic of the circuit, where the value k defines the length of the clock sequence. A method for determining the length of the clock sequence was presented. The obtained results show that the introduced delay test generation method using the presented functional fault models outperforms, in fault coverage, the transition test patterns obtained at the gate level by a deterministic test pattern generator. In particular, the introduced method obtains good-quality results for circuits where long test sequences are needed.

References

[1] Bareisa E., Jusas V., Motiejunas K., Seinauskas R. Test Generation at the Algorithm-Level for Gate-Level Fault Coverage. Microelectronics Reliability, 2008, Vol. 48, Issue 7.
[2] Bareisa E., Jusas V., Motiejunas K., Seinauskas R. Functional Delay Test Generation Based on Software Prototype. Microelectronics Reliability, 2009, Vol. 49, Issue 12.
[3] Bareiša E., Jusas V., Motiejūnas K., Šeinauskas R. Functional Delay Clock Fault Models. Information Technology and Control, Kaunas, Technologija, 2008, Vol. 37, No. 1.
[4] Barzilai Z., Rosen B. Comparison of AC Self-Testing Procedures. Proceedings of the IEEE International Test Conference, 1983.
[5] Bose S., V.D.
Agrawal. Sequential Logic Path Delay Test Generation by Symbolic Analysis. Proceedings of the 4th Asian Test Symposium, Nov. 1995.

[6] Chen G., Reddy S.M., Pomeranz I. Procedures for Identifying Untestable and Redundant Transition Faults in Synchronous Sequential Circuits. Proceedings of the 21st International Conference on Computer Design (ICCD'03), 2003.
[7] Cheng K.-T. Transition Fault Testing for Sequential Circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 12, No. 12, Dec. 1993.
[8] Dasgupta S., Walther R.G., Williams T.W., Eichelberger E.B. An Enhancement to LSSD and Some Applications of LSSD in Reliability, Availability and Serviceability. Proceedings of the International Symposium on Fault Tolerant Computing, 1981.
[9] Majumder S., Agrawal V.D., Bushnell M.L. Path Delay Testing: Variable-clock versus Rated-clock. Proceedings of the 11th International Conference on VLSI Design, Jan. 1998.
[10] Muth P. A Nine-Valued Circuit Model for Test Generation. IEEE Transactions on Computers, Vol. 25, No. 6, June 1976.
[11] Patil S., Savir J. Skewed Load Transition Test: Part I, Calculus. Proceedings of the IEEE International Test Conference, 1992.
[12] Pomeranz I., Reddy S.M. A Delay Fault Model for At-Speed Fault Simulation and Test Generation. Proceedings of the IEEE/ACM International Conference on Computer-Aided Design, Nov. 2006.
[13] Pomeranz I., Reddy S.M. Unspecified Transition Faults: A Transition Fault Model for At-Speed Fault Simulation and Test Generation. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 27, No. 1, January 2008.
[14] Pomeranz I., Reddy S.M. Double Single Stuck-at Faults: A Delay Fault Model for Synchronous Sequential Circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 28, No. 3, March 2009.
[15] Rearick J. Too Much Delay Fault Coverage is a Bad Thing. Proceedings of the IEEE International Test Conference, 2001.
[16] Underwood B., Law W.-O., Kang S., Konuk H.
Fastpath: A Path-Delay Test Generator for Standard Scan Designs. Proceedings of the IEEE International Test Conference, Oct. 1994.

INITIALIZATION OF SEQUENTIAL CIRCUITS USING SOFTWARE PROTOTYPES

Kestutis Morkunas, Rimantas Seinauskas

Kaunas University of Technology, Department of Software Engineering, Studentu str. 50, Kaunas, Lithuania

Abstract. In this article, the sequential circuit reset and initialization problem is presented. A method and algorithm are proposed for finding shortest-length reset sequences using circuit-emulating software prototypes. Using a software prototype gives the benefit of early test case generation. A reset sequence switches the circuit to a known state regardless of the initial state. In this work, finding a reset sequence relies on a software prototype that emulates an actual circuit. The proposed method and algorithm use randomly generated sets of circuit states and input signals, find the best reset candidate and validate the solution. ISCAS'89 benchmark sequential circuits were used for the experiments, and the results are provided within the article. They show that this method achieves results that are better than, or at least as good as, those of other algorithms, even though it operates under more difficult conditions.

Keywords: sequential circuit, reset, partial reset, software prototype, initialization, reliability.

1 Introduction

Present-day circuit manufacturing is expected to deliver top-class product quality in the shortest time frame possible. A product can be finished faster by reducing the time needed for various stages of the production process, such as design, implementation and test generation. This can be done by using automated design tools, various chip-size optimization techniques and more accurate, reliable and speedy tests. Concurrent design and test generation can help here. Normally, a circuit is tested after it is synthesized and burned onto a chip; this step is important and cannot be avoided. The manufacturing process is displayed in Figure 1.
The best-case scenario is when test cases are ready at the time of chip completion and the test generation process requires no additional time. Various software prototypes emulating the designed chip can be made before and during the specification generation phase. This helps to find design flaws and detect errors early in the process [10]. The chip fabrication process is as follows:

Figure 1. Stages of chip manufacturing.

Test generation can be moved next to the specification generation and design capture phases: test cases are generated by replacing the not-yet-existing chip with its software prototype [9]. To test a chip or its software prototype, a set of valid input signals must be known. The system boots up into an unknown state; to start testing, the system must be in a fixed state. Only then can input signals be sent and the results evaluated. If the expected and actual results match, the test passes; if not, there is a fault in the fabricated chip. To switch the system into a fixed state, an initialization pattern must be found. There are two main ways to reset a system into a fixed state: send a reset signal or signals, or use DFT (design-for-testability) methods. DFT uses additional input lines to send the required logical value straight into the memory elements; therefore, all required memory elements can be set to a known and needed state. A reset sequence transfers the system from any random power-up state into a fixed state. Both methods have their advantages and disadvantages. DFT offers a fast and convenient solution, but it has its disadvantages: extra inputs and additional hardware are required for the memory elements. By allowing the system to be set into a required state, DFT enables better chip coverage while searching for faults. A reset sequence does not require extra input lines, but the fixed state it transfers the system to might not be sufficient: some critical chip sections and paths might be left out of, or unreachable from, this fixed state, and thus untested.

Test case generation for testing a software prototype and test pattern generation for a manufactured system-on-chip differ. The 3-valued logic (0, 1, X for unknown) used by gate-level test generation for hardware testing is a problem when testing software prototypes: using this logic in a software prototype explodes the number of if statements required to emulate the chip's logic. This limitation increases the difficulty of the problem. Test generation using the chip's internal structure is not available at the design phase, as only software prototypes may exist: the chip itself is not manufactured yet, and the exact number of memory elements is not yet known. It is the fabrication algorithm that decides how to assemble the available logical elements so that they perform the required calculations [11]. If the chip's internal architecture is available, it is common to analyze it and determine various factors influencing the results, such as logic element placement priorities. By removing non-essential logical elements or merging them into separate entities, it is possible to reduce the chip size and the total state space; this reduction results in a smaller scope. A reset sequence is needed to generate test cases for the chip-emulating software prototype. A set of input signals turns the system's memory elements into a known state, which is unknown at the time of the chip's power-up; without a known starting state, a chip-emulating software prototype cannot be tested. It is possible that a system-on-chip powers up into a known state, which eliminates the need for a reset sequence [10].

2 Preceding work

A large portion of the research work is devoted to circuit initialization using the chip's known internal structure and the connections between logical elements.
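For illustration, the 3-valued gate semantics that a software prototype would have to emulate can be written out explicitly; each operator already needs several branches, and composing them across thousands of gates is what inflates the prototype code. The function names below are ours, not from any particular tool.

```python
X = 'x'  # the "unknown" value of 3-valued logic

def and3(a, b):
    """Three-valued AND: a controlling 0 dominates, otherwise X propagates."""
    if a == 0 or b == 0:
        return 0
    if a == X or b == X:
        return X
    return 1

def or3(a, b):
    """Three-valued OR: a controlling 1 dominates, otherwise X propagates."""
    if a == 1 or b == 1:
        return 1
    if a == X or b == X:
        return X
    return 0
```

A pure 0/1 prototype, by contrast, evaluates each gate with a single built-in boolean operator, which is why the text's approach restricts itself to 2-valued simulation at the cost of extra checks elsewhere.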
In this case, irrelevant elements and element groups may be removed, identical states identified (and removed, thus reducing the overall chip size), relations between memory elements investigated, and so on. The chip's structure is unavailable when using software prototypes that emulate a system-on-chip; therefore, some of the preceding works create solutions that are not applicable when using prototypes [11]. According to [1], a reset sequence can be difficult or impossible to find due to logical and conditional loops present in the manufactured chip's structure. A structural self-loop forms when an input of a flip-flop (trigger) is determined by its output alone; in this case, the trigger is removed from the main structure using a reset line. Logical self-loops are the structural loops that appear after part of the triggers are already synchronized [1]. The authors suggest using partial reset (a DFT technique) if a full reset is not found or does not exist: a subset of all triggers is selected for a forced reset after the circuit is analyzed and logical or conditional loops are found. According to [1], this technique allows synchronizing the largest ISCAS'89 circuits. A synchronization-tree method may be used to generate all the resetting sequences of the shortest length [2]. This method first checks whether the circuit is synchronizable; since all input combinations are required to form such a tree, the method is not practical for large chips. Wehbeh and Saab use logical and functional synchronization in their work [3]: the chip is partitioned, considering the relationships between the memory elements; the logical method is tried first, and if it fails, the functional one is used. Reset sequences have their disadvantages; e.g., the set of states that can be reached using reset sequences may be limited [4], so critical states that must be tested may become unreachable. A reset sequence may also not exist because of an existing chip fault.
In these cases, a DFT technique is needed to control the chip state and detect faults whose detection requires specific states. Using DFT may result in overtesting, as the reset line may bring the system into a state that will never be reached during normal operation of the circuit. M. Keim offers a method to quickly check whether a given circuit is resettable [5]. An approach to finding reset sequences using ant colony optimization has also been presented [6]. It appears that nature solved the problem of finding the shortest path from point A to point B quite well: ants find the shortest route to food supplies scattered around vast territories using a specific scent (pheromone), which is left along the trail by the ant scouts. Other ants follow this scent, which marks the shortest or best path found. The scent decays with time and must be renewed; if a new, better path is found, the trail is altered. This is called pheromone updating. In [6], such an approach is used to find the shortest reset sequences in systems-on-chips; the system-on-chip's internal architecture is used, and results are provided for two different algorithms [6], [7]. Another algorithm is presented in [8] for partial reset of large circuits, along with results for ISCAS'89 circuits; the chip's internal architecture is used. A chip may not have a reset sequence, so various methods end up offering a partial reset approach as a faster and simpler way to achieve results. Partial reset is a method of using separate reset lines to initialize hard-to-reset or impossible-to-reset triggers [8].

3 Calculation scope and data set reduction

The amount of calculation required for building a binary tree or a full scan of large circuits grows exponentially as the number of triggers increases. For example, 5 inputs and 3 triggers yield a state space of 8 states and 32 input combinations, so 256 computation cycles are required to test state-to-state transitions for all possible input/state combinations; 14 inputs and 6 triggers already result in 1,048,576 computation cycles. The s35932 circuit from ISCAS'89 would require about 5e+530 computation cycles for a full test. In this article we present a basic algorithm based on a software prototype emulating the operation of a circuit. It is based on random generation of input signals and trigger states. After a reset candidate is found, it is validated using a greatly increased state space. The generated sets are only a part of the whole set of possible inputs and states; therefore, all results are based on heuristics or assumptions. This primary reduction of the generated sets removes the need for the huge amounts of calculation mentioned above. If a reset candidate passes validation, one can assume that a reset sequence has been found. This may or may not be true, as only part of the whole state space has been tried. The circuit's gate-level model uses 3-valued logic (0, 1, X for unknown), whereas the software prototype can only use 0 and 1; this increases the number of checks required. The proposed algorithm was run on various ISCAS'89 circuits to generate experimental results. Some circuits may be initialized using one input pattern, while others require more. Some circuits may not be fully initialized using a reset pattern; in this case, a partial reset approach may be an option [1].

4 Proposed algorithm

A full circuit test is a difficult task, as explained in the previous section. Therefore, a reduced-size approach was used in the reset-sequence finding algorithm.
The main idea behind it is: if the algorithm finds a reset sequence for a small set of trigger states, and validates the same reset signal against a much larger set of trigger states, then we may say this is a resetting sequence for the circuit. Once again, this method is based on assumptions: not all possible input signals and trigger states were tested, so there is always a possibility that the results would not hold for sets that were not used in the calculations. A pseudo-description of the algorithm is provided in Figure 2.

begin
  generate_medium_sized_set_of_inputs()
  generate_small_sized_set_of_trigger_states()
  foreach (input_signal) {
    foreach (trigger_state) {
      calculate_circuit_output()
    }
    if (is_found_reset()) {
      grab_reset_seq_candidate()
    } else {
      take_best_resetting_input()
      increase_reset_sequence_length()
    }
  }
  generate_vlarge_sized_set_of_trigger_states()
  foreach (trigger_state) {
    calculate_circuit_output(using single input reset candidate)
  }
  if (is_found_reset()) validation_successful()
end

Figure 2. The pseudo code of the proposed algorithm

The algorithm steps are as follows:
1. The algorithm generates a medium-sized set of input patterns to send into the software prototype emulating the circuit. More input patterns increase the chance of finding the best possible reset candidate, but also increase the number of calculations and the time required to produce results. In the experiments, a set of 300 input patterns was used.
2. A small set of trigger states is generated. Bigger sets produce more reliable results and increase the probability that validation will pass; this also increases calculation time. Smaller sets result in faster completion but produce more false-positive reset candidates. In the experiments, a set of 20 states was used.
3. Each input signal is matched against each of the trigger states and the new state is calculated. This is the state into which the system transfers from the previous state after an input pattern is applied.
4. For each input pattern, the resulting states are analyzed and the number of fixed triggers is calculated. If this number equals the total number of triggers, a reset candidate is found; if not, the next input pattern is used for the calculations.

5. If a better reset candidate (one that initializes more triggers than the input patterns tried before) is found, it is saved as the best one.
6. A very large (compared to the starting set) set of states is generated; a much larger set of states was used in the experiments.
7. The best reset candidate is used with this large set of trigger states to calculate new system states (steps 3 and 4).
8. If this input pattern validates, an initializing pattern has been found.

5 Experimental results

This algorithm was tested using ISCAS'89 circuits. The results show that this algorithm (named PROTO) performs better than, or at least as well as, three other algorithms provided by other researchers. This article's algorithm operates under increased difficulty, because the internal structure of the chip is not known; therefore, many techniques employed by other researchers are unavailable. The partial-reset algorithm provides only partial results (for those circuits that did not require partial reset and for which a full reset sequence was found). Table 1 displays the experimental results of the proposed algorithm. The first column contains the names of the ISCAS'89 test circuits; the second gives the numbers of inputs, outputs and triggers (both input and output counts include the number of triggers). The third column states whether a full or partial reset sequence was found. If a full reset was found, the result string 'fixed state' is used: it is possible to switch the circuit from any random power-up state into a single fixed state using a set of resetting signals. If a full resetting signal is not found, the number of fixed flip-flops (memory elements) is displayed: the experiment using the provided algorithm could not find a full resetting sequence but managed to set a part of the triggers into a fixed state. Results vary from 0% (s510) to 99.7% (s38584) of triggers set to a fixed state after a found partial reset sequence is applied. Circuit s38584 has 1426 triggers, and only 3 were left unset.
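The steps above can be sketched in runnable form. This is a simplified reconstruction, not the authors' PROTO implementation: `step` is a hypothetical software prototype of the circuit, the greedy extension keeps one best pattern per position, and the validation set size here is an assumption (the pattern and small-state set sizes follow the text's 300 and 20).

```python
import random

def find_reset_sequence(step, n_in, n_ff, n_patterns=300,
                        n_states=20, n_validate=2000, max_len=50, seed=1):
    """Greedy search for a reset sequence on a software prototype.

    step(pattern, state) -> next_state models the circuit; pattern and
    state are bit tuples.  Grow the sequence one pattern at a time,
    always keeping the pattern that fixes the most triggers, then
    validate the candidate on a much larger random state set.
    """
    rng = random.Random(seed)
    rand_bits = lambda n: tuple(rng.randint(0, 1) for _ in range(n))
    patterns = [rand_bits(n_in) for _ in range(n_patterns)]
    states = [rand_bits(n_ff) for _ in range(n_states)]

    def fixed_triggers(finals):
        # count flip-flop positions that agree across all final states
        return sum(len({s[i] for s in finals}) == 1 for i in range(n_ff))

    seq = []
    while len(seq) < max_len:
        best = max(patterns,
                   key=lambda p: fixed_triggers([step(p, s) for s in states]))
        states = [step(best, s) for s in states]
        seq.append(best)
        if fixed_triggers(states) == n_ff:
            break                        # reset candidate found
    # validation against a much larger random state set
    finals = set()
    for _ in range(n_validate):
        s = rand_bits(n_ff)
        for p in seq:
            s = step(p, s)
        finals.add(s)
    return seq if len(finals) == 1 else None
```

As in the paper, a candidate that survives validation is only assumed to be a reset sequence, since just a fraction of the state space has been tried.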
The fourth column provides the number of fixed triggers using the PROTO software (the proposed algorithm based on a software prototype emulating the system-on-chip). The fifth column displays the length of the found reset sequence (full or partial). The search depth was limited to a maximum of 50 input signals; some circuits may have a full reset sequence longer than 50 and show up only as partially resetting in this experiment. The other five columns are analogous to the fifth and sixth columns and provide results for the algorithms used by other researchers (ACO-Init, GA-Init, Partial); the number next to each name refers to the article in the reference list describing that algorithm. The '-' signs in the results table mean that these circuits were not tested by the other articles' authors. The largest circuits have many triggers and are very hungry for CPU time; using larger input/state set sizes and search depth takes longer but provides better results. If a full reset is required but only a partial resetting sequence is available, it is possible to reset some of the triggers using this sequence and reset the remaining ones using DFT or direct input.

Table 1. Experimental results: reset sequence lengths and numbers of reset flip-flops using software prototypes. (Columns: circuit; number of inputs/outputs/triggers; full reset or maximum number of memory elements set; INITs and LEN for PROTO, ACO-Init [6] and GA-Init [7]; LEN for Partial [8]. The per-circuit numeric values are not recoverable from the source.)

(Table 1 continued; per-circuit values not recoverable from the source.)

Conclusions

We presented an algorithm for finding shortest-length reset sequences using circuit-emulating software prototypes. Experimental results show that this algorithm performs better than, or as well as, other methods on ISCAS'89 test circuits. The algorithm operates under conditions of increased difficulty, as the internal structure is not known and cannot be examined and used to reduce the problem scope. Increasing the sizes of the data sets might have provided better experimental results, but it increases the time required for each run, which may be limited. The novelty and research value of the proposed method come from using software that emulates circuits instead of manufactured chips. Such a method does not use the logical structure of the chip itself, and test generation may start earlier in the manufacturing process.

References

[1] Pomeranz I., Reddy S.M. On the Detection of Reset Faults in Synchronous Sequential Circuits. VLSI Design.
[2] Cheng K., Agrawal V. Initializability Consideration in Sequential Machine Synthesis. IEEE Trans. Comput., 1992, Vol. 41.
[3] Wehbeh J.A., Saab D.G. On the Initialization of Sequential Circuits. Intl. Test Conf., 1994.
[4] Pomeranz I., Reddy S.M. On Removing Redundancies from Synchronous Sequential Circuits with Synchronizing Sequences. IEEE Trans., Jan. 1996.
[5] Keim M., Becker B. On the (Non-)Resetability of Synchronous Sequential Circuits. IEEE VLSI Test Symposium, 1996.
[6] Xiaojing H., Zhengxiang S. Ant Colony Optimizations for Initialization of Synchronous Sequential Circuits. IEEE Circuits and Systems International Conf.,
2009.
[7] Corno F., Prinetto P. Initializability Analysis of Synchronous Sequential Circuits. ACM Trans. on Design Automation of Electronic Systems, 2002, Vol. 7, No. 2.
[8] Lu Y., Pomeranz I. Synchronization of Large Sequential Circuits by Partial Reset. IEEE VLSI Test Symp., 1996.
[9] Bareiša E., Jusas V., Motiejūnas K., Šeinauskas R. Functional Delay Clock Fault Models. Information Technology and Control, Kaunas, Technologija, 2008, Vol. 37, No. 1.
[10] Bareiša E., Jusas V., Motiejūnas K., Šeinauskas R. The Use of a Software Prototype for Verification Test Generation. Information Technology and Control, Kaunas, Technologija, 2008, Vol. 37, No. 4.
[11] Bareiša E., Jusas V., Motiejūnas K., Šeinauskas R. On the Enrichment of Functional Delay Fault Tests. Information Technology and Control, Kaunas, Technologija, 2009, Vol. 38, No. 3.

GENETIC ALGORITHM MODELING APPROACH FOR MOBILE MALWARE EVOLUTION FORECASTING

Vaidas Juzonis, Nikolaj Goranin, Antanas Cenys

Vilnius Gediminas Technical University, Department of Information Systems, Sauletekio al. 11, SRL-I-415, LT-10223, Vilnius, Lithuania

Abstract. Mobile malware is a relatively new but constantly increasing threat to information security and modern means of communication. A speedup of mobile malware evolution is highly expected due to the growth of the smartphone and other mobile device market and the shift in malware development from vandalism to economic motives. Forecasting evolution tendencies is important for the development of countermeasure techniques and the prevention of malware epidemic outbreaks. Existing malware propagation models mainly concentrate on modeling malware epidemic consequences, i.e. forecasting the number of infected computers, simulating malware behavior or economic propagation aspects, and are based only on current malware propagation strategies or oriented to other malware types. In this article we propose using the genetic algorithm modeling approach for mobile malware evolution forecasting. The genetic algorithm is selected as a modeling tool in view of its efficiency in solving optimization and modeling problems with large solution spaces and its successful application to evolution forecasting for other malware types. The model includes the genetic algorithm description, operating conditions, a chromosome that describes mobile malware characteristics, and the fitness function for propagation strategy evolution evaluation. The model was implemented and tested on the MATLAB platform.

Keywords: mobile, malware, genetic, algorithm, model, evolution, forecast.

1 Introduction

Nowadays malware, i.e.
software created with malicious purposes in order to harm computer software or to be installed on a computer without the permission of the legitimate user [21], is considered one of the major threats to information security, information systems and modern communication methods. The number of malware samples in the wild and the rate of malware usage by e-criminals tend to increase, making protection against it a crucial task [33]. A significant shift in the motivation for malicious activity has taken place over the past several years: from vandalism and recognition in the hacker community to attacks and intrusions for financial gain. This shift has been marked by a growing sophistication in the tools and methods used to conduct attacks, thereby escalating the network security arms race [2]. Mobile malware is defined as viruses, worms, Trojans or other malware types that spread on smartphones or other mobile devices running a mobile OS. Although it is a relatively new malware type and not yet very common in the wild, its share is highly expected to increase with the growth of the smart mobile device market. IDC [29] predicts that 1 billion mobile devices will go online. Protection against malware on mobile platforms is not very common compared to traditional computer systems, making them especially attractive to e-criminals. Mobile devices can also provide a variety of services to e-criminals that traditional systems cannot: SMS spam, MMS spam, call proxying, etc. A model is a physical, mathematical or logical representation of system entities, phenomena or processes [6].
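The genetic algorithm approach proposed in this article (a chromosome describing malware characteristics, evolved under a fitness function that scores propagation strategies) can be illustrated with a minimal sketch. This is not the authors' MATLAB model; the chromosome layout, the channel weights and the fitness function below are invented purely for illustration.

```python
import random

def evolve(fitness, n_genes=8, pop_size=30, gens=40, p_mut=0.05, seed=0):
    """Minimal genetic algorithm: a chromosome is a bit vector (e.g. which
    propagation channels a hypothetical malware strain uses), evolved by
    tournament selection, one-point crossover and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():  # tournament selection of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_genes)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: assumed per-channel reach weights (e.g. MMS spreads farther
# than Bluetooth); the paper's actual fitness function is more elaborate.
weights = [1, 5, 3, 2, 4, 1, 2, 3]
best = evolve(lambda c: sum(w * g for w, g in zip(weights, c)))
```

Here each gene marks whether a strain uses a given propagation channel, and the toy fitness simply rewards channels with larger assumed reach; selection then concentrates the population on high-reach strategy combinations.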
Modeling allows forecasting malware propagation damage [34] and evolution trends [11], understanding the behavior of malware, including its spreading characteristics [10], understanding the factors affecting malware spread, determining the effectiveness countermeasures require in order to control the spread, facilitating network designs that are resilient to malware attacks [26], predicting failures of the global network infrastructure [28], and many other tasks that cannot be investigated without harming production systems in the wild. Existing malware propagation models mainly concentrate on modeling the consequences of malware epidemics, simulating malware behavior or economic propagation aspects, and are oriented to traditional malware. In this article we propose using the genetic algorithm (GA) modeling approach for forecasting the evolution of mobile malware propagation strategies; it may also be used as a framework for forecasting the evolution of other characteristics. The genetic algorithm [15] was selected as a modeling tool since it simulates natural selection by repeatedly evolving a population of solutions and therefore may be used for predicting and modeling possible future propagation strategies. Genetic algorithm modeling has proved effective in many areas such as business decision making, bioinformatics [3], [14], [31], information security [7], [11], [12], [13] and others.

2 Mobile Malware Evolution and Technical Analysis

According to [17], the first mobile virus to appear was Cabir, which appeared on the 15th of June 2004, infected mobile phones running the Symbian OS and used the Bluetooth wireless network as a propagation channel. After a successful infection the virus appended its code to the telephone software,

activated Bluetooth and started searching for another Bluetooth device to which to forward the infected file. Since Bluetooth network coverage is limited to 10 meters, the propagation rate of the first mobile virus was rather limited. The first Trojan malware ("Skulls") also appeared in November 2004 [22], [24]. It infected Nokia mobile phones running the Symbian operating system. Skulls propagated by pretending to be a software update, usually a Macromedia Flash update file with a .sis extension. When the phone user activated the Trojan, it changed the phone configuration settings and depicted skulls on the screen. It also blocked many functions, such as SMS, MMS, calendar and camera; the phone user could only perform telephone calls. The mobile Trojan evolution continued with a new Trojan, Locknut.A [16]. Also created for the Symbian platform, it was notable for its size: the patch.sis file that contained the infection was only 2 KB, making it the smallest known Trojan for a mobile platform. The first mobile malware to use propagation methods other than Bluetooth was the Commwarrior.A virus, also running on the Symbian platform [8], [32]. It achieved much quicker propagation by MMS, since this method is not limited by distance, although Bluetooth was also supported. The MMS message included text in English which offered the phone user a new game, an update for antivirus software or similar. The message was sent to all contacts found in the phone address book. In this case the virus authors relied on social engineering: when the recipient receives a message from a friend or familiar person, the probability of opening it is higher than when it comes from an unknown number. An interesting detail is that Bluetooth was activated during working hours, while MMS messages were sent in the evening and at night. After each successful infection the virus waits one minute and then starts searching for a new victim.
In 2009 Kaspersky Lab discovered a new mobile malware named sms.python.flocker, written in the Python language and designed to manipulate mobile phone accounts. The main malware functionality is dedicated to financial gain: the virus sends SMS messages to a specific number, which allows transferring money from the account of the infected phone to the account of the malware author [17].

3 Prior and related work

Although it is widely accepted that malware evolution forecasting is an important information security task, the first model-based research paper on this topic appeared only in 2008 [11] and discussed Internet worm evolution trends. In that article we proposed a general framework for forecasting Internet worm evolution (propagation strategy). The propagation strategy was selected since it is one of the most descriptive malware characteristics. The Internet worm characteristics representation structure, the fitness function and the experiment results were provided. It was shown that a GA may be used for malware characteristic evolution forecasting. The proposed model was tested on existing worms' propagation strategies with known infection probabilities. The tests proved the effectiveness of the model in evaluating propagation rates and showed the tendencies of worm evolution. A rather similar concept was proposed almost one year later in [25]. The authors validate the notion of evolution in viruses on the well-known Bagle virus family. The results of the proof-of-concept study showed that previously unknown viruses of the Bagle family successfully evolved starting from a random population. That paper is more malware specific than our previous article [11], since its characteristic representation is created for a specific malware type (the Bagle virus), is code-dependent, mainly demonstrates the evolution concept and is not specialized for evolution forecasting. Non-GA models mainly cover modeling of malware epidemic consequences, i.e.
forecasting the number of infected computers, simulating malware behavior or economic propagation aspects, and are based only on current malware propagation strategies or oriented to other malware types. The first epidemiological model of computer virus propagation was proposed in [18]. Epidemiological models abstract from the individuals and consider them as units of a population. Each unit can only belong to a limited number of states: a SIR model assumes the Susceptible-Infected-Recovered state chain, and a SIS model the Susceptible-Infected-Susceptible chain. A technical report [37] describes a model of worm propagation in which the authors model the Internet service as an undirected graph of relationships between people. In order to build a simulation of this graph, they assume that each node's degree follows a power-law probability distribution. Malware propagation in Gnutella-type P2P networks was described in [26] by Ramachandran et al. An analytical model that emulates the mechanics of a decentralized Gnutella-type peer network was formulated, and the spread of malware on such networks was studied. The Random Constant Spread (RCS) model [30] was developed by Staniford et al. using empirical data derived from the outbreak of the CodeRed worm. The model assumes that a machine cannot be compromised multiple times and operates with a constant average compromise rate K, which depends on worm processor speed, network bandwidth, location of the infected host, etc. The model can predict the number of infected hosts at time t if K is known. As [23] states, although more complicated models can be derived, most network worms will follow this trend. Other authors [5] propose the AAWP discrete-time model, hoping to better capture the discrete-time behavior of a worm; however, according to [28], a continuous model is appropriate for large-scale models.
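The RCS model described above reduces to a logistic growth equation, da/dt = K·a·(1 − a), where a(t) is the fraction of vulnerable hosts infected at time t. A minimal numerical sketch (function and parameter names are ours, not from [30]):

```python
# Numerical sketch of the Random Constant Spread (RCS) model [30]:
# da/dt = K * a * (1 - a), where a(t) is the infected fraction of
# vulnerable hosts and K is the average compromise rate.
def rcs_curve(k, a0=1e-6, dt=0.01, t_max=40.0):
    """Integrate the RCS equation with simple Euler steps."""
    a, t, curve = a0, 0.0, []
    while t < t_max:
        curve.append(a)
        a += dt * k * a * (1.0 - a)
        t += dt
    return curve

curve = rcs_curve(k=0.8)
```

The curve shows the familiar slow-start, exponential-growth and saturation phases: a stays near zero at first, then rises steeply and levels off near 1.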
On the other hand, Zanero et al. in [28] propose a sophisticated compartment-based model, which treats the Internet as an interconnection of autonomous systems, i.e. subnetworks, whose interconnections are so-called bottlenecks. The model assumes that

inside a single autonomous system the worm propagates unhindered, following the RCS model. The authors motivate the necessity of their model by the fact that the network bottlenecks may be flooded by malware. Zou et al. in [34] propose a two-factor propagation model, which is more precise in modeling the saturation phase, taking into consideration human countermeasures and the decreased scan and infection rate caused by the large amount of scan traffic. The same authors have also published an article on modeling worm propagation under dynamic quarantine defense [36] and evaluated the effectiveness of several existing and prospective worm propagation strategies [35]. Lelarge in [19] introduces an economic approach to malware epidemic modeling (including botnets). Li et al. [20] model botnet-related cybercrimes as the result of profit-maximizing decision-making from the perspectives of both botnet masters and renters/attackers; from this economic model they derive the effective rental size and the optimal botnet size. Fultz in [9] describes DDoS attacks organized with the help of botnets as economic security games. The increase in mobile device popularity has prompted the appearance of models dedicated to mobile malware. Ruitenbeek et al. in [27] simulate virus propagation using parameterized stochastic models of a network of mobile phones, created with the help of the Mobius tool, and provide insight into the relative effectiveness of each response mechanism. Two models of mobile phone virus propagation were designed to study the impact of viruses on the dependability and security of mobile phones: the first model quantifies the propagation of MMS viruses and the second that of Bluetooth viruses. Bulygin in [4] analyses two viruses using different propagation methods (MMS and Bluetooth) in an SI (Susceptible->Infected) model.
4 Evolution forecasting model

4.1 General assumptions

The model proposed in this article aims at forecasting mobile malware evolution tendencies and thereby differs from other malware models, which concentrate on modeling the epidemiologic or economic consequences of malware outbreaks. Simulation environments serve many purposes, but they are only as good as their content [1]. While designing the model it is necessary to select the main factors out of many and reject those that are unimportant or may distort the results. In the case of GA modeling the main task consists of three parts: appropriate selection of the chromosome structure, which represents the solution; definition of the fitness function; and the GA operating conditions, such as population size, mutation rates, parent selection, etc. The model proposed in this article is based on the model previously proposed in [11], with some modifications adapting it for mobile malware evolution forecasting. Although the proposed model is adapted to propagation strategy evolution forecasting, with some modifications (a fitness function change) it can be used for forecasting the evolution of other characteristics. Here we define the propagation strategy as a combination of methods and techniques used by malware to ensure malware population growth. In the current study we have chosen to model strategies for a theoretical mobile virus which aims at infecting the largest number of mobile devices during a fixed, relatively short period of time.

4.2 Experiment conditions

The GA consists of initialization, selection and evolution stages. During the initialization stage an initial population of strategies is generated, each strategy represented as a chromosome. At the selection stage strategies are selected through a fitness-based process; if the termination condition is met, algorithm execution ends, otherwise the evolutionary mechanisms are activated.
The initial population is generated on a random basis, i.e. each individual, representing a separate strategy, is composed of random gene values. The population size N is equal to 50 and remains constant after each new generation. The algorithm stops producing new generations once the number of generations reaches 100. Fitness-proportionate selection was used. The mutation operator is applied to each newly generated individual with a 0.05 probability. The MATLAB platform was used for the model implementation.

4.3 Strategy representation

Each strategy is represented as a chromosome (Table 1) composed of genes, i.e. a combination of techniques and methods. Genes are divided into AA (always active: compulsory or activating genes) and AE (active if enabled by an AA gene). Such a division ensures representation flexibility and a fixed chromosome length
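Under the operating conditions of Section 4.2 (N = 50, 100 generations, fitness-proportionate selection, mutation probability 0.05), the algorithm skeleton can be sketched as follows. The bit-string chromosome and the toy fitness function are illustrative placeholders only; the actual model uses the gene encoding of Table 1 and the fitness function of Eq. 1:

```python
import random

POP_SIZE, GENERATIONS, MUTATION_P, GENES = 50, 100, 0.05, 22

def fitness(chrom):
    # Placeholder fitness: counts enabled genes; the real model uses Eq. 1.
    return sum(chrom) + 1e-9  # small offset keeps selection weights positive

def roulette(pop, weights):
    # Fitness-proportionate (roulette-wheel) parent selection.
    return random.choices(pop, weights=weights, k=2)

def crossover(a, b):
    cut = random.randint(1, GENES - 1)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom):
    # Each newly generated individual mutates with probability MUTATION_P.
    if random.random() < MUTATION_P:
        i = random.randrange(GENES)
        chrom[i] = 1 - chrom[i]
    return chrom

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):          # termination: generation limit
        weights = [fitness(c) for c in pop]
        pop = [mutate(crossover(*roulette(pop, weights)))
               for _ in range(POP_SIZE)]  # population size stays constant
    return max(pop, key=fitness)

best = evolve()
```

With the sum-based placeholder fitness, selection pressure drives the population toward chromosomes with most genes enabled, which makes the convergence behavior easy to inspect before substituting the real fitness function.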

Table 1. Chromosome structure (columns: gene number / name / type / description and comments / value range or sample values).

1 / TRANSF1 / AA* / Defines the 1st supported propagation type; enables NR / MMS
2 / TRANSF2 / AA / Defines the 2nd supported propagation type; enables NR / SMS
3 / TRANSF3 / AA / Defines the 3rd supported propagation type; enables BT / Bluetooth
4 / TRANSF4 / AA / Defines the 4th supported propagation type; enables EMAIL / E-mail
5 / TRANSF5 / AA / Defines the 5th supported propagation type; enables WIFI / Wi-Fi
6 / NR / AE / Telephone number search or generation module; effective if the SMS or MMS transfer methods are enabled / Address book; accepted/dialed numbers; random; scan
7 / BT / AE / Scanner module that searches for mobile devices with Bluetooth support / Scan
8 / EMAIL / AE / Module sending e-mail / Address book; e-mail address DB; scan
9 / WIFI / AE / Scanner module that searches for mobile devices with Wi-Fi support / Scan
10 / OS_PLATF / AA / OS platform affected by malware / Linux; WIN MOBILE; SYMBIAN
11 / TEL / AA / Telephone models affected by malware / NOKIA; SAMSUNG; Apple; RIM
12, 13, 14 / EN_EXPL_N / AA / EXPL_N (N=1-3) activation gene / ON=ExploitRef; OFF
15, 16, 17 / EXPL_N (N=1-3) / AE / Defines the exploit used for propagation / Random exploit out of a suitable exploit array
18 / NR_TIME / AA / Defines the NR gene's activity hours / Always; 10:00-20:00; 20:00-10:00
19 / BT_TIME / AA / Defines the BT gene's activity hours / Always; 10:00-20:00; 20:00-10:00
20 / WIFI_TIME / AA / Defines the WIFI gene's activity hours / Always; 10:00-20:00; 20:00-10:00
21 / EXEC / AA / Defines additional malware functionality; activates EXEC_CHAN / None; Manage; Update; Manage+Update
22 / EXEC_CHAN / AE / Defines the malware update channel / Wi-Fi; web-update

* AA: always active (compulsory or activating) gene; AE: active if enabled by an AA gene.

4.4 Fitness function

From [30] it follows that the efficiency of a propagation strategy can be evaluated by the value K: the number of computers the first malware individual in the wild can infect in a fixed time period. The higher K is, the higher the fitness of the propagation strategy.
Our calculations of K by the fitness function (Eq. 1) are based on a combined statistical and empirical evaluation of the time expenditure of the strategy's functionality and a probabilistic evaluation of the strategy's efficiency. Probabilities and time consumption values for activation genes and for genes that are not enabled are equal to 0 and may be excluded from the calculations.

F(S) = k * (1 - (1 - p6(NR_TIME)) * (1 - p7(BT_TIME)) * (1 - p8) * (1 - p9(WIFI_TIME))) * p10 * p11 * (1 - prod(i=15..17) (1 - pi))    (1)

where: S is the evaluated strategy; p6-p9 are the probabilities that the exploits will be successfully transferred to the target device (p6, p7 and p9 are time dependent); p10 is the probability that the target device runs the supported OS; p11 is the probability that the device hardware is compatible; p15-p17 are the probabilities that an exploit will result in infection; and k is the number of cycles that the virus, using the evaluated strategy, can perform in a one-second time interval (Eq. 2):

k = 1 / sum(j=1..22) tj    (2)

where tj is the time expenditure needed for the j-th gene's functionality. The fitness function can be read as follows: the evaluated strategy S can perform k cycles per second; during each cycle the virus, using this strategy, will infect a target host if at least one of the transfer methods successfully transfers the exploits to the target, the target runs the supported OS on supported hardware, and at least one of the exploits results in target infection. Compared
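Once the per-gene probabilities and time expenditures are fixed, Eq. 1 can be evaluated directly. A sketch with illustrative numbers (not taken from the experiment):

```python
import math

def strategy_fitness(p_transfer, p_os, p_hw, p_exploit, gene_times):
    """Evaluate Eq. 1 for one strategy.

    p_transfer : probabilities p6..p9 that a transfer method delivers the
                 exploits (already adjusted for activity hours); disabled
                 methods contribute 0.
    p_os, p_hw : probabilities p10, p11 (supported OS / compatible hardware).
    p_exploit  : probabilities p15..p17 that an exploit infects the target.
    gene_times : time expenditures t_j of the enabled genes, in seconds.
    """
    k = 1.0 / sum(gene_times)  # Eq. 2: cycles per second
    p_any_transfer = 1.0 - math.prod(1.0 - p for p in p_transfer)
    p_any_exploit = 1.0 - math.prod(1.0 - p for p in p_exploit)
    return k * p_any_transfer * p_os * p_hw * p_any_exploit

# Illustrative numbers only:
f = strategy_fitness(p_transfer=[0.3, 0.0, 0.0, 0.2],
                     p_os=0.5, p_hw=0.6,
                     p_exploit=[0.4, 0.0, 0.0],
                     gene_times=[2.0, 1.0, 1.0])
```

The "at least one transfer method succeeds" and "at least one exploit infects" terms are the complements of the products in Eq. 1, computed here with `math.prod`.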

to our previous model for Internet worms described in [11], the limitations on probability values were removed. The correctness of the proposed fitness function was tested on historical data by applying it to malware samples whose fitness had been observed experimentally.

4.5 Experiment results

The best fitness result achieved during the algorithm test was equal to F(Sd)=. Compared to the fitness F(Sp) = 0.017 of a sample strategy of current mobile malware (transfer method: MMS only; OS platform: Symbian; telephone platform: NOKIA; activity hours: always; numbers used: address book; one exploit), the fitness of the predicted mobile virus increased almost times. The fitness change during evolution of the best individual is shown in Fig. 1 and the average population fitness change in Fig. 2. It should be noted that the general population fitness also increases over time and that the number of individuals with better strategies grows even though the best individual's evolution stops after the 42nd generation.

Figure 1. Best strategy fitness change graph
Figure 2. Average population fitness change graph

Compared to the sample strategy, the following functionality (genes) was enabled in the best strategy during evolution: Windows Mobile support and Wi-Fi transfer method support. We can assume that these methods were included since they provide rather high infection efficiency (an additional popular OS, and Wi-Fi with relatively high network coverage). Other potentially efficient methods were not included, since their added value to propagation efficiency was outweighed by time consumption; other methods do not result in infection at all (additional functionality) or even reduce the propagation rate (e.g. limitation by hours).

5 Conclusions

In this article the genetic algorithm modeling approach for mobile malware evolution forecasting was proposed.
This is a new modeling approach for this malware type, since it forecasts mobile malware evolution trends, in contrast to traditional models that concentrate on modeling epidemic consequences. Model tests were performed for mobile malware propagation strategy forecasting. The proposed model included the genetic algorithm description, operating conditions, a chromosome that describes mobile malware characteristics and a fitness function for propagation strategy evolution evaluation. The model was implemented and tested on the MATLAB platform. The model test results show that if malware creators intend to optimize the propagation strategy, mobile malware evolution will tend toward the inclusion of additional OS platforms and propagation over Wi-Fi networks. The forecasted propagation strategy tends not to be overloaded with functionality, because added functions increase time consumption. The main application area of the model is countermeasure planning, since the model predicts propagation strategy trends. The current study shows that special attention should be paid to wireless security on mobile devices. The model can also be used as a framework (a fitness function modification would be needed) for modeling the evolution of other mobile malware parameters, such as stealth, functionality or their combinations.

References

[1] Banks S.B., Stytz M.R. Challenges of Modeling BotNets for Military and Security. Proceedings of SimTecT.
[2] Barford P., Yegneswaran V. An Inside Look at Botnets. Advances in Information Security, Springer US, 2007, volume 27.
[3] Birchenhall C., Kastrinos N., Metcalfe S. Genetic algorithms in evolutionary modeling. Journal of Evolutionary Economics, 1997, volume 7.
[4] Bulygin Y. Epidemics of Mobile Worms. Performance, Computing, and Communications Conference (IPCCC 2007), IEEE International, 2007.

[5] Chen Z., Gao L., Kwiat K. Modeling the Spread of Active Worms. Proceedings of INFOCOM 2003, Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies, 2003, volume 3.
[6] Defense Acquisition University. Systems Engineering Fundamentals. Defense Acquisition University Press.
[7] Faraoun K.M., Boukelif A. Genetic Programming Approach for Multi-Category Pattern Classification Applied to Network Intrusions Detection. International Journal of Computational Intelligence, 2007, volume 3(1).
[8] F-Secure. Worm:SymbOS/Commwarrior. F-Secure Corporation. Interactive.
[9] Fultz N. Distributed attacks as security games. Master thesis, UC Berkeley School of Information.
[10] Garetto M., Gong W., Towsley D. Modeling Malware Spreading Dynamics. Proceedings of INFOCOM.
[11] Goranin N., Cenys A. Genetic Algorithm Based Internet Worm Propagation Strategy Modeling. Information Technology and Control, 2008, volume 37.
[12] Goranin N., Cenys A. Genetic algorithm based Internet worm propagation strategy modeling under pressure of countermeasures. Journal of Engineering Science and Technology Review, 2009, volume 2.
[13] Goranin N., Cenys A. Malware Propagation Modeling by the Means of Genetic Algorithms. Electronics and Electrical Engineering, 2008, volume 86.
[14] Hill R.R., McIntyre G.A., Narayanan S. Genetic Algorithms for Model Optimization. Proceedings of the Simulation Technology and Training Conference (SimTechT).
[15] Holland J. Adaptation in Natural and Artificial Systems. The MIT Press.
[16] Jarno U. Disinfection tool for SymbOS/Locknut.A (Gavno.A and Gavno.B). F-Secure Corporation. Interactive.
[17] Kaspersky Lab. Kaspersky Lab reports. Interactive.
[18] Kephart J.O., White S.R. Directed-graph epidemiological models of computer viruses. Proceedings of the IEEE Computer Society Symposium, 1991.
[19] Lelarge M. Economics of Malware: Epidemic Risks Model, Network Externalities and Incentives.
Proceedings of the Fifth Biannual Conference on the Economics of the Software and Internet Industries.
[20] Li Z., Liao Q., Striegel A. Botnet Economics: Uncertainty Matters. Managing Information Risk and the Economics of Security, Springer US, 2009.
[21] Monga R. MASFMMS: Multi Agent Systems Framework for Malware Modeling and Simulation. Lecture Notes in Computer Science, Springer Berlin / Heidelberg, 2009, volume 5269.
[22] Naraine R. Cell Phone Security: New Skulls Mutant Comes with Virus Extras. Interactive.
[23] Nazario J. Defense and Detection Strategies against Internet Worms. Artech House Publishers.
[24] Niemela J. F-Secure Virus Descriptions: Skulls.D. F-Secure Corporation. Interactive.
[25] Noreen S., Murtaza S., Shafiq M.Z., Farooq M. Evolvable malware. GECCO '09: Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation, ACM, 2009.
[26] Ramachandran K., Sikdar B. Modeling malware propagation in Gnutella type peer-to-peer networks. Proceedings of the Parallel and Distributed Processing Symposium (IPDPS), 2006, volume 20, 8 pp.
[27] Ruitenbeek E.V., Courtney T., Sanders W.H., Stevens F. Quantifying the Effectiveness of Mobile Phone Virus Response Mechanisms. IEEE/IFIP International Conference on Dependable Systems and Networks, 2007.
[28] Serazzi G., Zanero S. Computer Virus Propagation Models. Lecture Notes in Computer Science, Springer-Verlag, 2004.
[29] Shah A. IDC: 1 Billion Mobile Devices Will Go Online by. IDG News Service. Interactive.
[30] Staniford S., Paxson V., Weaver N. How to 0wn the Internet in Your Spare Time. Proceedings of the 11th USENIX Security Symposium, USENIX Association, 2002.
[31] Stender J., Hillebrand E., Kingdon J. Genetic Algorithms in Optimization, Simulation and Modelling. IOS Press.
[32] Sundgot J. First Symbian OS virus to replicate over MMS appears. Interactive.
[33] Turner D. Symantec Global Internet Security Threat Report. Symantec Corporation.
[34] Zou C.C., Gong W., Towsley D.
Code Red Worm Propagation Modeling and Analysis. CCS '02: Proceedings of the 9th ACM Conference on Computer and Communications Security, ACM, 2002.
[35] Zou C.C., Gong W., Towsley D. On the performance of Internet worm scanning strategies. Performance Evaluation, Elsevier Science Publishers B.V., 2005, volume 63.
[36] Zou C.C., Gong W., Towsley D. Worm Propagation Modeling and Analysis under Dynamic Quarantine Defense. WORM '03: Proceedings of the 2003 ACM Workshop on Rapid Malcode, ACM, 2003.
[37] Zou C.C., Towsley D., Gong W. Virus Propagation Modeling and Analysis. Technical report TR-CSE-03-04, University of Massachusetts.

BRINGING MODELS INTO PRACTICE: DESIGN AND USAGE OF UML PROFILES AND OCL QUERIES IN A SHOWCASE

Joanna Chimiak-Opoka 1, Berthold Agreiter 1,2, Ruth Breu 1

1 University of Innsbruck, Institute of Computer Science, ICT Building, Technikerstrasse 21a, 6020 Innsbruck, Austria, joanna.opoka@uibk.ac.at, berthold.agreiter@uibk.ac.at, ruth.breu@uibk.ac.at
2 arctis Softwaretechnologie GmbH, Jaegerweg 2, A-6401 Inzing, Austria

Abstract. The introduction of systematic modelling practices in an enterprise is a demanding task. The challenges are mainly related to ensuring a lasting modelling culture, especially in smaller IT departments. In this paper, we analyse experiences from a modelling project in an industrial setting. The major goal was to improve the documentation quality of the existing, largely informal process model and to establish a commonly accepted modelling culture. During the project, a UML profile was iteratively developed and applied to a model. Furthermore, OCL was used for automated quality assessment by model querying. The major benefits observed by the industry partner were improved knowledge sharing among the project participants, supported by an intuitive modelling notation, and automatic information retrieval from the model. Moreover, we describe our adaptations of the applied methodologies and the quality improvements achieved in the project.

Keywords: experience story, UML profile, domain specific modelling (DSML), model querying with OCL, model quality, DSL

1 Introduction

This paper presents a field report on modelling the internal business processes and data flows of a company. The objective of the project was to document a software-assisted business process of a major retail store chain and, further, to investigate the quality of the process and its model in an automated way. Additionally, we wanted to investigate the applicability of the applied methods and tools in an industrial context.
The primary goal of the project was to document the business process and information flow in a clearly defined and unambiguous way, with readily available tools and within a short time, in order to make this knowledge accessible to all developers within the company. A subsequent goal was to improve engineers' understanding of complex processes and to be able to spot inconsistencies or weaknesses. To meet these requirements we decided to use an existing UML tool and make use of the UML profiling mechanism. As a consequence of this solution, costs were kept low because no implementation work was required, and the domain specific modelling language (defined as a UML profile) could be designed using a refinement approach instead of being designed from scratch. Additionally, the approach enabled easy adaptation of the visual representation of model elements, so that models are easier for domain experts to understand through the use of intuitive element shapes. Another aspect we explored in this project was automated quality assessment by querying the model using the Object Constraint Language (OCL). We systematically analysed the quality improvements achieved by the modelling and querying methodology we introduced. Querying is especially interesting because it can be used to assist the modelling process so that the created model maintains high quality. This contribution describes the development process of the UML profile and its application to seven use cases. Both the development of the UML profile and of the model were conducted iteratively and in close cooperation with domain experts. Furthermore, an analysis framework consisting of a number of OCL queries was developed to assess the quality of the model.
The objectives are (1) to describe an iterative, expert-supported, agile adaptation of existing methods for developing UML profiles, (2) to share our experience of using this method in an industrial context, (3) to present a practical method for model analysis, and (4) to share our experience of using it to ensure technical correctness, adherence to conventions, and quality assessment of a model. To increase the clarity of the detailed project description that follows, we provide definitions of the most important concepts in Table 1. The structure of the paper is as follows: Section 2 describes the setting and motivates systematic modelling and model querying. Section 3 describes the methods, frameworks and tools used, and motivates their usage in the context of the project. Next, in Section 4 we demonstrate the development process of the model and the queries within the project and describe the quality improvements obtained. Finally, Section 5 concludes and gives an outlook on future work.

2 Project Context

This project was initiated by a major retail store chain with over 4000 employees and 150 stores. The company develops large parts of its inventory and warehouse management software on its own while

integrating third party solutions for some specific areas. Its IT landscape is constantly evolving; hence the organisation wanted to improve the documentation of the whole system and the processes involved. It was aware that a common modelling language, which can be unambiguously interpreted and used by all its developers, would further improve the usefulness of the documentation.

Status quo ante

As certain parts of the system landscape have evolved significantly over the years, and their development was partitioned among several groups, it became increasingly hard to keep an overview of the whole system. Hence, deciding on the best strategy for meeting new requirements became more challenging over time. Mainly because of the following two points, the company started capturing its business processes in models:

Table 1. Definitions of basic concepts related to modelling and model analysis.

metamodel: a model that defines the language for expressing a model [12]. In our context, the defined language is a Domain Specific Language (DSL).
model: a simplification of something so we can view, manipulate, and reason about it, and so help us understand the complexity inherent in the subject under study [10]. A model is a description written in a well defined language (metamodel) [7].
diagram: a graphical presentation of a collection of model elements, most often rendered as a connected graph of arcs (relationships) and vertices (other model elements) [12].
quality model: a framework defining and relating relevant quality aspects.
quality assurance: a process for establishing stakeholder confidence that a model fulfils certain expectations.
quality assessment: a general term that embraces all methods used to judge quality. The judgement is based on a quality model.
model query: a means to retrieve information from a model to reason about the subject under study or to assess model quality.

1.
Overview of all components: when developers had to modify a part of the system which could possibly affect subsystems developed by other people, they needed to cross-check whether their modification would have any unintentional effects on these subsystems. 2. Finding suitable interfaces: when a new third party solution was about to be introduced in the company, it could be cumbersome to find out which existing components this solution needed interfaces to and how the data could be provided. Consequently, the company created some first diagrams capturing information about the interfaces among different applications. The purpose of these diagrams was mainly documentation; as a result, communication among developers could be organised efficiently without having to consult the source code of other components. Some developers started modelling which data is used by certain applications and how this data is passed on to other applications. Essentially, these diagrams were created with drawing tools and showed the flow of information in different processes, e.g. which applications are involved in a process, from which source data is read and where it is written to. After initiating this activity, the industry partner identified the following problems: The semantics of the different model elements was not clearly defined; some developers interpreted and modelled certain facts differently than others. Sometimes developers felt that it was not possible to express a specific process with the existing elements and introduced new shapes/model elements. In most cases the diagrams were created to reflect the most current state of a process; however, it could happen that the implementation of a process changed but the corresponding model was left unchanged. Consequently, the model and the actual code implementing the process were not synchronised. Hence, on the one hand, a modelling language that is intuitive to every domain expert had to be found.
On the other hand, the model created this way needs to support checking certain properties. Such checks can concern the modelled process, like "To which databases is application A connected?", but also the quality of the model itself, like "Are there any unnamed applications in the model?". Thus, the organisation contacted arctis Softwaretechnologie GmbH. Arctis was in charge of developing an appropriate modelling technique in this context and responsible for modelling the initial use cases together with staff from the IT department of the enterprise.

Use Cases

Important and large examples were selected by the industry partner with the expectation of covering as many different constructs as possible. The selected use cases were modelled at a comparable level of detail. It was desired that the use cases be complex enough that submodels, i.e. subprocesses of a use case at a higher level of detail, also need to be created. Furthermore, the industry partner selected use cases exhibiting interactions with each other. This means, for instance, that an application used in one use case produces data which is sent to applications of a different use case. The main purposes of modelling were documentation, improved communication among developers, and a centralised point of information. Every developer should be able to view and edit the model if changes become necessary. The processes should be described in such a way that, among other information, all involved software applications, the locations where they run, the types and protocols with which they communicate, and the databases they use are displayed. To give a better understanding of what these examples look like, we briefly describe the Customer Order case. As the name suggests, this process covers orders by customers. In this case the term customer does not identify a person buying something in a shop but a subsidiary store ordering goods at the headquarters. This process involves different stakeholders: employees in a store and major customers. These stakeholders start the process by different means of ordering goods, like fax or other ways of invoking a software call at the server in the headquarters. The server runs several applications to process these orders and read or write data in the corresponding database tables. In the case of the customer order process, there are several communication channels to other use cases. For example, after the order data is written to a database, it is fetched by a different process, which represents the Order Processing use case.

Quality Assessment

The objective of this project was not only to document software-assisted business processes but also to analyse different quality aspects of the model in an automated way. We distinguish between two different aspects of quality assessment of the model. One aspect targets the domain-specific quality, the other targets the linguistic quality of the model.
Domain-specific aspects perform information retrieval to increase understanding of the modelled process and thus provide input for process improvements. Linguistic aspects consider the quality of the model as an artefact. These are mostly related to technical issues, such as the internal model representation by the modelling tool, or issues related to the modelling process, like covering user-defined quality aspects, i.e. completeness, consistency, and adherence to conventions. Both these quality aspects need to be assessed so that the model does not contain unused elements or even contradicting information. For example, to assure that all communications between different use cases are modelled correctly, the conditions shown in Figure 1 should be satisfied. The technical realisation for this check is described in Sections 3.2 and 4.2. The selection of methods and tools for project realisation is described in the following section.

Completeness of the definition of the inter-use-case communication triples:
- each incoming communication for a use case must have
- a corresponding outgoing communication in a different use case and
- a corresponding communication in the global view

Figure 1. Informal description of a quality aspect checking communications between use cases.

3 Concepts, Methods and Tools

In this section, we focus on methods and tools applied to the metamodel and model development (Section 3.1), the development of model queries (Section 3.2), and the quality model used to discuss quality improvements (Section 3.3).

3.1 Model Development

The requirement from the industry partner was to model the interplay of different applications and software components as a business process in a graphical and comprehensible manner. Unnecessary complexity should be avoided, and model understandability is a key goal. Our industry partner already had numerous diagrams and a clear picture of what to capture.
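The informal condition of Figure 1 can be phrased operationally: every incoming communication of a use case must be matched by an outgoing communication in some other use case and by a communication in the global view. The following is a minimal sketch of such a check in Python, assuming a deliberately simple tuple-based stand-in for the model (the names and structure are illustrative, not the project's actual metamodel, where this check was expressed in OCL):

```python
# Communications are modelled as (use_case, direction, channel) facts;
# the global view is a set of channel names. These structures are
# illustrative stand-ins for the real UML model elements.

def check_communication_triples(communications, global_view):
    """Return (use_case, channel) pairs violating the triple-completeness
    rule: each incoming communication needs a matching outgoing
    communication in a *different* use case and an entry in the global view."""
    violations = []
    for use_case, direction, channel in communications:
        if direction != "in":
            continue
        has_outgoing = any(
            uc != use_case and d == "out" and ch == channel
            for uc, d, ch in communications
        )
        if not has_outgoing or channel not in global_view:
            violations.append((use_case, channel))
    return violations

comms = [
    ("CustomerOrder", "out", "order_data"),
    ("OrderProcessing", "in", "order_data"),
    ("OrderProcessing", "in", "stock_level"),  # no matching outgoing side
]
print(check_communication_triples(comms, {"order_data"}))
# -> [('OrderProcessing', 'stock_level')]
```

An empty result means the modelled communications are complete in the sense of Figure 1; each returned pair points at a use case whose incoming communication lacks a counterpart.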
However, not all of the existing diagrams were created using the same technique, as it was still in an evolving phase. Nevertheless, the existing diagrams helped us considerably in choosing the modelling language. Different process modelling languages were considered, among others the Business Process Modelling Notation (BPMN) [16] and Event-driven Process Chains (EPC) [11]. They offer very usable concepts, but the representation of data and datastores was not sophisticated enough. The industry partner wanted a clean separation between different types of communication and a structured representation of datastores and data. Furthermore, these diagrams were not customisable enough in the sense that custom shapes could be assigned to model elements. After reviewing the existing diagrams and taking the aforementioned needs into account, the choice fell on UML activity diagrams. They can be used for process modelling as they are similar to BPMN [15] and EPC [6]. However, UML activity diagrams by themselves were still lacking domain specificity and offered a modelling spectrum which was too wide and in which the semantics of model elements was not immediately visually recognisable. We decided to use the UML profiling mechanism to address the adaptation of activity diagrams. For modelling business processes, a standard UML profile has been proposed [13]. However, for our project it was too generic, too large, and would have needed further adaptation to be recognised as an appropriate DSL. As a lightweight, easy-to-learn, and intuitive language was required by our project partner, we created our own UML profile tailored to the project requirements.

We used the commercial UML modelling tool MagicDraw, which fulfilled all requirements for the modelling tool. To our partner it was important that an existing, user-friendly, and stable tool be used, to avoid development and maintenance costs and to have a preferably flat learning curve. An additional important point covered by MagicDraw was the possibility to highly customise diagrams, e.g. by adapting shapes and icons for different model elements or different line styles for data flows and control flows. To be able to analyse the model, it was important to have a tool that strictly adheres to existing standards such as UML and XMI. This allowed us to import the model into our analysis tool.

Figure 2. The development process of the DSL and the model.

For the development of the DSL we defined an iterative development method based on existing methods [1, 14]. We followed the approach described in [1] to design a DSL: identify fundamental language constructs, relationships, constraints, concrete syntax, and semantics. We combined this approach with a pragmatic method proposed in [14], supported by MagicDraw, in which DSL samples are created and the DSL environment is tested. Moreover, we defined an iterative process with alternating interviews with domain experts and modelling steps. The development process (Figure 2) consists of an initial step (upper swimlane) and several macro iterations (lower swimlane). In every phase, domain and modelling experts cooperate. In the initial step, domain concepts and relations are identified and documented after an initial interview with domain experts. Based on this information, an initial version of the UML profile and a sample diagram are created by the model designers. These artefacts are the basis of discussion between domain experts and model designers on the expressiveness and understandability of the proposed DSL. The feedback from the domain experts is included in the next step, i.e. the modelling of the first use case.
Each use case is modelled in one macro iteration. In each macro iteration, domain and modelling experts work together to assure high appropriateness of the model. First, domain experts prepare an informal description of the use case. Next, the model designers start micro iterations to model the use case and update the UML profile and customisations if necessary. Afterwards, domain experts evaluate and refine the model and profile, i.e. the accuracy of the model, its understandability, and the intuitiveness of its representation. The feedback is integrated in the current and following use cases.

3.2 Model Queries

As indicated at the beginning of this section, queries can be used to reason about the subject under study or to assess model quality. Thus, we consider domain-specific and linguistic aspects (compare Section 2). As we decided to use UML, the first-choice query language was OCL. It has been formally shown that OCL 2.0 is expressive enough to be used as a query language [2]. Another reason to select this language was our positive experience from previous projects [5, 4]. For developing and managing the collection of queries, we decided to use our library extension to OCL [3] and the OCLEditor developed in our research group. The library extension to OCL enables collecting and managing OCL expressions and models. Within libraries, standard OCL definitions and our additional extensions for queries and tests are collected. Queries are expressions used to assess model quality or to retrieve specific information from a model. To increase the semantic correctness of the expressions, we use tests. The mechanism is similar to unit testing, including the definition of test cases and test data. For the purposes of our project we defined a model analysis and library development process (see Figure 3). The upper swimlane corresponds to the manual model analysis, the lower swimlane to the library development process.
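The pairing of queries with unit-style tests described above can be mimicked outside OCL. Below is a hypothetical sketch of the pattern in Python; the library, the `register` helper, and the example query are invented for illustration and only echo the shape of the OCL library-with-tests mechanism, not its actual API:

```python
# Each library entry couples a query (a function over a model) with test
# cases that pin down its intended semantics, echoing the OCL
# library-with-tests mechanism described in the text.

library = []

def register(query, tests):
    """Add a query to the library only if all of its test cases pass."""
    for test_model, expected in tests:
        assert query(test_model) == expected, f"semantics test failed for {query.__name__}"
    library.append(query)
    return query

def unnamed_applications(model):
    """Linguistic-quality query: ids of applications missing a name."""
    return [e["id"] for e in model
            if e["type"] == "Application" and not e.get("name")]

register(unnamed_applications, tests=[
    # the test model and expected result play the role of test data
    ([{"id": "a1", "type": "Application", "name": "Billing"},
      {"id": "a2", "type": "Application"}], ["a2"]),
])
print(len(library))  # -> 1
```

As in the project's process, a query only becomes part of the library once its tests agree with the manually determined results, and the test models must then stay frozen so the tests remain meaningful.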
First, a common requirement for model analysis and library development is specified. The quality aspect is selected, e.g. the definition of the inter-use-case communication triples (Figure 1). For this aspect, OCL definitions and queries are specified in the development step. The next step is quality assessment, where the results of manual and automatic analysis are crosschecked. For the selected aspect, manual inspection is used to determine the result of this aspect for the model. Simultaneously, appropriate queries are evaluated on the model. If the results of the model inspection and the query evaluation differ, the reason has to be determined, and either the OCL definition specification or the manual inspection needs to be repeated.

Figure 3. The model analysis and library development process.

The manual inspection of the model is repeated until the correctness of a query reaches a defined confidence level. Afterwards, the query can be used for automatic model analysis. If the results are equal, the last step can be executed, i.e. quality assurance. The aim of this step is to assure the semantic correctness of OCL expressions in the future development of the library. For this purpose, tests are specified and evaluated regularly. In the test evaluation step, tests and test models are required to assess the desired semantics of definitions, just as test cases and test data are required to assess the semantic correctness of software. As a consequence, the model is used as a test model and thus must be frozen.

3.3 Quality Model

As the size of our case study was too small to obtain statistically significant data, we decided to select a qualitative model to request and structure feedback from our project partner. We selected the SEmiotic QUALity (SEQUAL) framework [9, 8], providing a holistic view on model quality with seven quality dimensions in two layers (Figure 4). In the technical layer three quality dimensions are considered, i.e. physical, empirical, and syntactical quality; in the social layer the four remaining dimensions, i.e. semantical, pragmatic, social, and organisational quality, are considered.
Below we give brief definitions of the seven quality dimensions. The physical quality dimension relates the externalised model and participant knowledge. The externalised model is the set of all explicit or implicit statements in the model. In the case of this project there are informal and formal descriptions of the use cases. The participants are people involved in the development or usage of the model. Two aspects are considered in this quality dimension: what is modelled and how is it protected and shared. The first aspect is represented as the ratio of known statements represented in the externalised model to all known statements about the domain (externalisation). The second aspect is internalisation and includes model persistence and availability aspects. The empirical quality dimension considers the readability of a model, including its complexity and aesthetics. The syntactical quality dimension relates model externalisation and language extension. The language extension is the set of all statements possible to express in the language. In the presented case the language is the DSL. This dimension considers syntactical correctness with respect to the language. The semantical quality dimension relates model externalisation and the modelling domain. It covers two aspects: the first is validity which assures that all statements made in the model are regarded as correct and relevant to the domain; the second is completeness which assures that the model actually contains all correct and relevant statements about the domain. The pragmatic quality dimension relates model externalisation and interpretation by technical and social actors. It requires that the model has been understood by the targeted participants. The social quality dimension considers the degree of agreement among participants. Each participant has a subjective knowledge about the domain and a different mental view (model) of the domain. 
Thus, by reading an externalised model, each participant may interpret it differently. This quality dimension considers the agreement with respect to different objects: knowledge, model, and interpretation. Two degrees of agreement are considered: relative agreement, where the various objects are consistent but may be incomplete, and absolute agreement, where all objects are the same (equal).

The organisational quality dimension analyses whether the model fulfils the goals of modelling in the first place.

Figure 4. An overview of the SEQUAL framework [9, 8].

4 Development and Analysis of Project Results

To give better insight into how the development of the various artefacts was conducted, we show some statistics and give examples. The size of the UML profile and of the model during the different iterations of the project is discussed first (Section 4.1). After that, we show how the size of the OCL libraries developed over time and give an example of an OCL expression (Section 4.2). Finally, we discuss the quality improvements discovered (Section 4.3).

4.1 UML Profile and Models

As mentioned earlier, development was conducted iteratively. In each iteration the UML profile evolved. The changes to stereotypes and tagged values during macro and micro iterations are illustrated on the left side of Figure 5. After creating the initial UML profile (0.0 in Figure 5) we started modelling the first use case. During this expert interview only the name of one single stereotype changed. However, after the domain experts reviewed the model, it was discovered that a relatively large number of stereotypes and tagged values was missing. They were added in the next step. As expected, the first use case needed the largest number of micro iterations ( ). During the second macro iteration another set of model elements was added, some were renamed, and one stereotype was considered obsolete (2.0). During the last macro iteration only one stereotype was introduced, and after review the UML profile remained unchanged ( ). For the remaining macro iterations ( ) no changes were required. The resulting DSL contains 31 stereotypes, tagged values, and enumerations. Statistics about the size of the model are illustrated in the right part of Figure 5.
The figure visualises the strong connection between Classes and CentralBufferNodes, especially in the first phase, as both were used to represent databases. Class diagrams were used for modelling the different tables, whereas CentralBufferNodes then represent these tables in activity diagrams. The look and feel of the diagrams was customised at the request of the industry partner. This customisation was considered very important because, on the one hand, different elements can be distinguished much more easily and, on the other hand, appropriate icons for elements greatly increase the readability and understandability of complex diagrams (see Figure 6). Simple variation of colours for different stereotypes was not sufficient, because it does not render a model more intuitive to understand, and because this information would largely be lost when diagrams are printed and photocopied.

Figure 5. Statistics of changes to UML profile elements: stereotypes and tagged values (left) and for the model (right). Iteration numbers are denoted with the number of the macro iteration followed by a dot and the number of the micro iteration. The macro iteration 0.0 represents the initial step of the UML profile development (see Figure 2).

Figure 6. Selected icons for various stereotypes with intuitive and easy-to-distinguish symbols. Note that the symbols for activities executed by internal or external users differ only slightly, but the difference is easy to see.

4.2 OCL Queries

Defined OCL expressions were evaluated in the quality assessment and quality assurance phases (Figure 3). Therefore, the role of the model was twofold: as an object for model analysis and as test data for query development. Figure 8 shows a query implementing a check for the correctness of the modelling of communications (Figure 1). For every incoming communication object, it looks up whether there is a corresponding outgoing communication object on a different process. It furthermore searches for the corresponding communication object in the global diagram. The global diagram should contain all modelled processes and the communications between them. The right side of Figure 8 shows a result for this query, where one can see that two tuples are returned, and for both of them all three elements (incomingcommunication, outgoingcommunication, and globalcommunication) are available. This means that these two communication objects are correct. For the project we defined 47 queries and in total 134 OCL expressions of different types and with different scopes (Figure 7). Within the project we had on average 121% definition usage in queries, i.e. some definitions were used more than once. On average we had a test coverage of 123%, i.e. for some definitions we had more than one test. Moreover, almost half (46%) of the expressions may be reused in other projects, as they are not project specific and can also be applied to general-purpose UML.

Figure 7. Statistics for OCL library development: diversity of element types (definitions, queries, tests) over the libraries (UML, UML activity diagram, and UML profile specific).

Figure 8.
Example artefacts for model queries: a definition retrieving which data is transferred among different processes (left) and a possible evaluation result for a query using the definition (right).

4.3 Quality Improvements

Below we discuss quality improvements according to the SEQUAL framework introduced in Section 3.3. For each quality dimension we explain the status before (pre) and after (post) modelling, as well as the means used to improve this dimension.

The physical quality (completeness ratio, persistence and availability)

Pre: Weak externalisation and internalisation. Not all known statements were expressed in the diagrams, i.e. they contained only a partial description of the business process and information flow, because they did not always reflect the current status of a system. Moreover, the diagrams were only partially available, and there was no organisation-wide modelling environment.

Post: Improved externalisation and internalisation. During each macro iteration a subsequent use case was modelled, and all known and relevant statements were externalised in the model. Moreover, the model was stored electronically and available to all participants. Additionally, internalisation increased, especially the availability of information stored in the model by means of model queries. Queries allow access to different views on the model from the domain perspective, e.g. creating a list of all applications producing documents.

Means: Systematic modelling with a UML profile in MagicDraw and model querying with OCL.
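Domain-perspective queries like the one mentioned above ("a list of all applications producing documents") were written in OCL in the project. As a language-neutral illustration of the idea, here is a hypothetical Python sketch over a toy model representation; the element structure and names are invented, not taken from the project's metamodel:

```python
# Toy stand-in for model elements; in the project such queries were
# written in OCL against the UML model, not in Python.
model = [
    {"type": "Application", "name": "OrderEntry", "produces": ["Invoice"]},
    {"type": "Application", "name": "Billing", "produces": []},
    {"type": "Database", "name": "Orders"},
]

def applications_producing_documents(elements):
    """Domain-specific query: applications producing at least one document."""
    return [e["name"] for e in elements
            if e["type"] == "Application" and e.get("produces")]

print(applications_producing_documents(model))  # -> ['OrderEntry']
```

The point of such queries is internalisation: each one materialises a domain view of the model that would otherwise require reading many diagrams.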


More information

Open Work of Two-Hemisphere Model Transformation Definition into UML Class Diagram in the Context of MDA

Open Work of Two-Hemisphere Model Transformation Definition into UML Class Diagram in the Context of MDA Open Work of Two-Hemisphere Model Transformation Definition into UML Class Diagram in the Context of MDA Oksana Nikiforova and Natalja Pavlova Department of Applied Computer Science, Riga Technical University,

More information

2 nd UML 2 Semantics Symposium: Formal Semantics for UML

2 nd UML 2 Semantics Symposium: Formal Semantics for UML 2 nd UML 2 Semantics Symposium: Formal Semantics for UML Manfred Broy 1, Michelle L. Crane 2, Juergen Dingel 2, Alan Hartman 3, Bernhard Rumpe 4, and Bran Selic 5 1 Technische Universität München, Germany

More information

OMG Specifications for Enterprise Interoperability

OMG Specifications for Enterprise Interoperability OMG Specifications for Enterprise Interoperability Brian Elvesæter* Arne-Jørgen Berre* *SINTEF ICT, P. O. Box 124 Blindern, N-0314 Oslo, Norway brian.elvesater@sintef.no arne.j.berre@sintef.no ABSTRACT:

More information

Computation Independent Model (CIM): Platform Independent Model (PIM): Platform Specific Model (PSM): Implementation Specific Model (ISM):

Computation Independent Model (CIM): Platform Independent Model (PIM): Platform Specific Model (PSM): Implementation Specific Model (ISM): viii Preface The software industry has evolved to tackle new approaches aligned with the Internet, object-orientation, distributed components and new platforms. However, the majority of the large information

More information

ITSS Model Curriculum. - To get level 3 -

ITSS Model Curriculum. - To get level 3 - ITSS Model Curriculum - To get level 3 - (Corresponding with ITSS V3) IT Skill Standards Center IT Human Resources Development Headquarters Information-Technology Promotion Agency (IPA), JAPAN Company

More information

INF5120 and INF9120 Modelbased System development

INF5120 and INF9120 Modelbased System development INF5120 and INF9120 Modelbased System development Lecture 5: 13.02.2016 Arne-Jørgen Berre arneb@ifi.uio.no and Arne.J.Berre@sintef.no Telecom and Informatics 1 Course parts (16 lectures) - 2017 January

More information

Unified Modeling Language (UML)

Unified Modeling Language (UML) Unified Modeling Language (UML) Troy Mockenhaupt Chi-Hang ( Alex) Lin Pejman ( PJ ) Yedidsion Overview Definition History Behavior Diagrams Interaction Diagrams Structural Diagrams Tools Effect on Software

More information

On the link between Architectural Description Models and Modelica Analyses Models

On the link between Architectural Description Models and Modelica Analyses Models On the link between Architectural Description Models and Modelica Analyses Models Damien Chapon Guillaume Bouchez Airbus France 316 Route de Bayonne 31060 Toulouse {damien.chapon,guillaume.bouchez}@airbus.com

More information

Semantic Web Domain Knowledge Representation Using Software Engineering Modeling Technique

Semantic Web Domain Knowledge Representation Using Software Engineering Modeling Technique Semantic Web Domain Knowledge Representation Using Software Engineering Modeling Technique Minal Bhise DAIICT, Gandhinagar, Gujarat, India 382007 minal_bhise@daiict.ac.in Abstract. The semantic web offers

More information

MDA and Integration of Legacy Systems: An Industrial Case Study

MDA and Integration of Legacy Systems: An Industrial Case Study MDA and Integration of Legacy Systems: An Industrial Case Study Parastoo Mohagheghi 1, Jan Pettersen Nytun 2, Selo 2, Warsun Najib 2 1 Ericson Norway-Grimstad, Postuttak, N-4898, Grimstad, Norway 1 Department

More information

Modelling in Enterprise Architecture. MSc Business Information Systems

Modelling in Enterprise Architecture. MSc Business Information Systems Modelling in Enterprise Architecture MSc Business Information Systems Models and Modelling Modelling Describing and Representing all relevant aspects of a domain in a defined language. Result of modelling

More information

RAPTOR: A VISUAL PROGRAMMING ENVIRONMENT FOR TEACHING OBJECT-ORIENTED PROGRAMMING *

RAPTOR: A VISUAL PROGRAMMING ENVIRONMENT FOR TEACHING OBJECT-ORIENTED PROGRAMMING * RAPTOR: A VISUAL PROGRAMMING ENVIRONMENT FOR TEACHING OBJECT-ORIENTED PROGRAMMING * Martin C. Carlisle Department of Computer Science United States Air Force Academy carlislem@acm.org ABSTRACT Learning

More information

Unit 1 Introduction to Software Engineering

Unit 1 Introduction to Software Engineering Unit 1 Introduction to Software Engineering João M. Fernandes Universidade do Minho Portugal Contents 1. Software Engineering 2. Software Requirements 3. Software Design 2/50 Software Engineering Engineering

More information

Modeling Systems Using Design Patterns

Modeling Systems Using Design Patterns Modeling Systems Using Design Patterns Jaroslav JAKUBÍK Slovak University of Technology Faculty of Informatics and Information Technologies Ilkovičova 3, 842 16 Bratislava, Slovakia jakubik@fiit.stuba.sk

More information

developer.* The Independent Magazine for Software Professionals

developer.* The Independent Magazine for Software Professionals developer.* The Independent Magazine for Software Professionals Improving Developer Productivity With Domain-Specific Modeling Languages by Steven Kelly, PhD According to Software Productivity Research,

More information

CHAPTER 1. Topic: UML Overview. CHAPTER 1: Topic 1. Topic: UML Overview

CHAPTER 1. Topic: UML Overview. CHAPTER 1: Topic 1. Topic: UML Overview CHAPTER 1 Topic: UML Overview After studying this Chapter, students should be able to: Describe the goals of UML. Analyze the History of UML. Evaluate the use of UML in an area of interest. CHAPTER 1:

More information

Object Oriented Programming

Object Oriented Programming Unit 19: Object Oriented Unit code: K/601/1295 QCF Level 4: BTEC Higher National Credit value: 15 Aim To provide learners with an understanding of the principles of object oriented programming as an underpinning

More information

Towards Traceability Metamodel for Business Process Modeling Notation

Towards Traceability Metamodel for Business Process Modeling Notation Towards Traceability Metamodel for Business Process Modeling Notation Saulius Pavalkis 1,2, Lina Nemuraite 1, and Edita Milevičienė 2 1 Kaunas University of Technology, Department of Information Systems,

More information

A Domain-Specific Language for Modeling Web User Interactions with a Model Driven Approach

A Domain-Specific Language for Modeling Web User Interactions with a Model Driven Approach A Domain-Specific Language for Modeling Web User Interactions with a Model Driven Approach Carlos Eugênio Palma da Purificação / Paulo Caetano da Silva Salvador University (UNIFACS) Salvador, Brazil email:

More information

Architecture Viewpoint Template for ISO/IEC/IEEE 42010

Architecture Viewpoint Template for ISO/IEC/IEEE 42010 Architecture Viewpoint Template for ISO/IEC/IEEE 42010 Rich Hilliard r.hilliard@computer.org VERSION 2.1b Abstract This is a template for specifying architecture viewpoints in accordance with ISO/IEC/IEEE

More information

Model Driven Ontology: A New Methodology for Ontology Development

Model Driven Ontology: A New Methodology for Ontology Development Model Driven Ontology: A New Methodology for Ontology Development Mohamed Keshk Sally Chambless Raytheon Company Largo, Florida Mohamed.Keshk@raytheon.com Sally.Chambless@raytheon.com Abstract Semantic

More information

SysML, It s Coming Are You Prepared?

SysML, It s Coming Are You Prepared? SysML, It s Coming Are You Prepared? Presentation for George Mason University Shana L. Lloyd The Aerospace Corporation 703-324-8877 Shana.l.lloyd@aero.org January 31, 07 1 Outline Introduction SysML Background

More information

Comparative analyses for the performance of Rational Rose and Visio in software engineering teaching

Comparative analyses for the performance of Rational Rose and Visio in software engineering teaching Journal of Physics: Conference Series PAPER OPEN ACCESS Comparative analyses for the performance of Rational Rose and Visio in software engineering teaching To cite this article: Zhaojun Yu and Zhan Xiong

More information

QoS-aware model-driven SOA using SoaML

QoS-aware model-driven SOA using SoaML QoS-aware model-driven SOA using SoaML Niels Schot A thesis submitted for the degree of MSc Computer Science University of Twente EEMCS - TRESE: Software Engineering Group Examination committee: Luís Ferreira

More information

NoMagic Product Comparison Brief

NoMagic Product Comparison Brief 1 NoMagic Product Comparison Brief Presented to: SET, AMSEWG Last Updated : September 15 th, 2017 Presented by: David Fields Overview NoMagic offers a variety of UML and SysML tools each with multiple

More information

Experimental transformations between Business Process and SOA models

Experimental transformations between Business Process and SOA models International Journal of Informatics Society, VOL.4, NO.2 (2012) 93-102 93 Experimental transformations between Business Process and SOA models Akira Tanaka, and Osamu Takahashi view5 LLC, Japan School

More information

iserver Free Archimate ArchiMate 1.0 Template Stencil: Getting from Started Orbus Guide Software Thanks for Downloading the Free ArchiMate Template! Orbus Software have created a set of Visio ArchiMate

More information

Requirements Elicitation

Requirements Elicitation Requirements Elicitation Introduction into Software Engineering Lecture 4 25. April 2007 Bernd Bruegge Applied Software Engineering Technische Universitaet Muenchen 1 Outline Motivation: Software Lifecycle

More information

Bizagi Process Management Suite as an Application of the Model Driven Architecture Approach for Developing Information Systems

Bizagi Process Management Suite as an Application of the Model Driven Architecture Approach for Developing Information Systems Bizagi Process Management Suite as an Application of the Model Driven Architecture Approach for Developing Information Systems Doi:10.5901/ajis.2014.v3n6p475 Abstract Oskeol Gjoni PHD Student at European

More information

challenges in domain-specific modeling raphaël mannadiar august 27, 2009

challenges in domain-specific modeling raphaël mannadiar august 27, 2009 challenges in domain-specific modeling raphaël mannadiar august 27, 2009 raphaël mannadiar challenges in domain-specific modeling 1/59 outline 1 introduction 2 approaches 3 debugging and simulation 4 differencing

More information

MSc(IT) Program. MSc(IT) Program Educational Objectives (PEO):

MSc(IT) Program. MSc(IT) Program Educational Objectives (PEO): MSc(IT) Program Master of Science (Information Technology) is an intensive program designed for students who wish to pursue a professional career in Information Technology. The courses have been carefully

More information

Vocabulary-Driven Enterprise Architecture Development Guidelines for DoDAF AV-2: Design and Development of the Integrated Dictionary

Vocabulary-Driven Enterprise Architecture Development Guidelines for DoDAF AV-2: Design and Development of the Integrated Dictionary Vocabulary-Driven Enterprise Architecture Development Guidelines for DoDAF AV-2: Design and Development of the Integrated Dictionary December 17, 2009 Version History Version Publication Date Author Description

More information

SysML Past, Present, and Future. J.D. Baker Sparx Systems Ambassador Sparx Systems Pty Ltd

SysML Past, Present, and Future. J.D. Baker Sparx Systems Ambassador Sparx Systems Pty Ltd SysML Past, Present, and Future J.D. Baker Sparx Systems Ambassador Sparx Systems Pty Ltd A Specification Produced by the OMG Process SysML 1.0 SysML 1.1 Etc. RFI optional Issued by Task Forces RFI responses

More information

Applying ISO/IEC Quality Model to Quality Requirements Engineering on Critical Software

Applying ISO/IEC Quality Model to Quality Requirements Engineering on Critical Software Applying ISO/IEC 9126-1 Quality Model to Quality Engineering on Critical Motoei AZUMA Department of Industrial and Management Systems Engineering School of Science and Engineering Waseda University azuma@azuma.mgmt.waseda.ac.jp

More information

SOFTWARE ARCHITECTURE & DESIGN INTRODUCTION

SOFTWARE ARCHITECTURE & DESIGN INTRODUCTION SOFTWARE ARCHITECTURE & DESIGN INTRODUCTION http://www.tutorialspoint.com/software_architecture_design/introduction.htm Copyright tutorialspoint.com The architecture of a system describes its major components,

More information

BSIF. A Freeware Framework for. Integrated Business Solutions Modeling. Using. Sparx Systems. Enterprise Architect

BSIF. A Freeware Framework for. Integrated Business Solutions Modeling. Using. Sparx Systems. Enterprise Architect 33 Chester Rd Tawa 5028 Wellington New Zealand P: (+64) 4 232-2092 m: (+64) 21 322 091 e: info@parkconsulting.co.nz BSIF A Freeware Framework for Integrated Business Solutions Modeling Using Sparx Systems

More information

Model Driven Development of Component Centric Applications

Model Driven Development of Component Centric Applications Model Driven Development of Component Centric Applications Andreas Heberle (entory AG), Rainer Neumann (PTV AG) Abstract. The development of applications has to be as efficient as possible. The Model Driven

More information

Software Language Engineering of Architectural Viewpoints

Software Language Engineering of Architectural Viewpoints Software Language Engineering of Architectural Viewpoints Elif Demirli and Bedir Tekinerdogan Department of Computer Engineering, Bilkent University, Ankara 06800, Turkey {demirli,bedir}@cs.bilkent.edu.tr

More information

Model Driven Engineering (MDE)

Model Driven Engineering (MDE) Model Driven Engineering (MDE) Yngve Lamo 1 1 Faculty of Engineering, Bergen University College, Norway 26 April 2011 Ålesund Outline Background Software Engineering History, SE Model Driven Engineering

More information

Parametric Maps for Performance-Based Urban Design

Parametric Maps for Performance-Based Urban Design Parametric Maps for Performance-Based Urban Design A lateral method for 3D urban design Jernej Vidmar University of Ljubljana, Faculty of Architecture, Slovenia http://www.modelur.com jernej.vidmar@modelur.com

More information

INTELLIGENT SYSTEM OF GEARBOXES DESIGN

INTELLIGENT SYSTEM OF GEARBOXES DESIGN 6 th INTERNATIONAL MULTIDISCIPLINARY CONFERENCE INTELLIGENT SYSTEM OF GEARBOXES DESIGN Eugen Valentin, BUTILĂ, Transilvania University of Braşov, Eroilor, 29, 500036 Gheorghe Leonte, MOGAN, Transilvania

More information

Development of Educational Software

Development of Educational Software Development of Educational Software Rosa M. Reis Abstract The use of computer networks and information technology are becoming an important part of the everyday work in almost all professions, especially

More information

Raising the Level of Development: Models, Architectures, Programs

Raising the Level of Development: Models, Architectures, Programs IBM Software Group Raising the Level of Development: Models, Architectures, Programs Dr. James Rumbaugh IBM Distinguished Engineer Why Is Software Difficult? Business domain and computer have different

More information

Software Service Engineering

Software Service Engineering Software Service Engineering Lecture 4: Unified Modeling Language Doctor Guangyu Gao Some contents and notes selected from Fowler, M. UML Distilled, 3rd edition. Addison-Wesley Unified Modeling Language

More information

Metaprogrammable Toolkit for Model-Integrated Computing

Metaprogrammable Toolkit for Model-Integrated Computing Metaprogrammable Toolkit for Model-Integrated Computing Akos Ledeczi, Miklos Maroti, Gabor Karsai and Greg Nordstrom Institute for Software Integrated Systems Vanderbilt University Abstract Model-Integrated

More information

Minsoo Ryu. College of Information and Communications Hanyang University.

Minsoo Ryu. College of Information and Communications Hanyang University. Software Reuse and Component-Based Software Engineering Minsoo Ryu College of Information and Communications Hanyang University msryu@hanyang.ac.kr Software Reuse Contents Components CBSE (Component-Based

More information

Overview of lectures today and Wednesday

Overview of lectures today and Wednesday Model-driven development (MDA), Software Oriented Architecture (SOA) and semantic web (exemplified by WSMO) Draft of presentation John Krogstie Professor, IDI, NTNU Senior Researcher, SINTEF ICT 1 Overview

More information

Requirements Engineering for Enterprise Systems

Requirements Engineering for Enterprise Systems Association for Information Systems AIS Electronic Library (AISeL) AMCIS 2001 Proceedings Americas Conference on Information Systems (AMCIS) December 2001 Requirements Engineering for Enterprise Systems

More information

Oral Questions. Unit-1 Concepts. Oral Question/Assignment/Gate Question with Answer

Oral Questions. Unit-1 Concepts. Oral Question/Assignment/Gate Question with Answer Unit-1 Concepts Oral Question/Assignment/Gate Question with Answer The Meta-Object Facility (MOF) is an Object Management Group (OMG) standard for model-driven engineering Object Management Group (OMG)

More information

Enhancing validation with Prototypes out of Requirements Model

Enhancing validation with Prototypes out of Requirements Model Enhancing validation with Prototypes out of Requirements Model Michael Deynet, Sabine Niebuhr, Björn Schindler Software Systems Engineering, Clausthal University of Technology, 38678 Clausthal-Zellerfeld,

More information

Systems Analysis and Design in a Changing World, Fourth Edition

Systems Analysis and Design in a Changing World, Fourth Edition Systems Analysis and Design in a Changing World, Fourth Edition Systems Analysis and Design in a Changing World, 4th Edition Learning Objectives Explain the purpose and various phases of the systems development

More information

Research on Computer Network Virtual Laboratory based on ASP.NET. JIA Xuebin 1, a

Research on Computer Network Virtual Laboratory based on ASP.NET. JIA Xuebin 1, a International Conference on Advances in Mechanical Engineering and Industrial Informatics (AMEII 2015) Research on Computer Network Virtual Laboratory based on ASP.NET JIA Xuebin 1, a 1 Department of Computer,

More information

Compositional Model Based Software Development

Compositional Model Based Software Development Compositional Model Based Software Development Prof. Dr. Bernhard Rumpe http://www.se-rwth.de/ Seite 2 Our Working Groups and Topics Automotive / Robotics Autonomous driving Functional architecture Variability

More information

A Solution Based on Modeling and Code Generation for Embedded Control System

A Solution Based on Modeling and Code Generation for Embedded Control System J. Software Engineering & Applications, 2009, 2: 160-164 doi:10.4236/jsea.2009.23023 Published Online October 2009 (http://www.scirp.org/journal/jsea) A Solution Based on Modeling and Code Generation for

More information

Creating and Analyzing Software Architecture

Creating and Analyzing Software Architecture Creating and Analyzing Software Architecture Dr. Igor Ivkovic iivkovic@uwaterloo.ca [with material from Software Architecture: Foundations, Theory, and Practice, by Taylor, Medvidovic, and Dashofy, published

More information

Future Directions for SysML v2 INCOSE IW MBSE Workshop January 28, 2017

Future Directions for SysML v2 INCOSE IW MBSE Workshop January 28, 2017 Future Directions for SysML v2 INCOSE IW MBSE Workshop January 28, 2017 Sanford Friedenthal safriedenthal@gmail.com 1/30/2017 Agenda Background System Modeling Environment (SME) SysML v2 Requirements Approach

More information

ANZSCO Descriptions The following list contains example descriptions of ICT units and employment duties for each nominated occupation ANZSCO code. And

ANZSCO Descriptions The following list contains example descriptions of ICT units and employment duties for each nominated occupation ANZSCO code. And ANZSCO Descriptions The following list contains example descriptions of ICT units and employment duties for each nominated occupation ANZSCO code. Content 261311 - Analyst Programmer... 2 135111 - Chief

More information

Second OMG Workshop on Web Services Modeling. Easy Development of Scalable Web Services Based on Model-Driven Process Management

Second OMG Workshop on Web Services Modeling. Easy Development of Scalable Web Services Based on Model-Driven Process Management Second OMG Workshop on Web Services Modeling Easy Development of Scalable Web Services Based on Model-Driven Process Management 88 solutions Chief Technology Officer 2003 Outline! Introduction to Web Services!

More information

DB2 for z/os: Programmer Essentials for Designing, Building and Tuning

DB2 for z/os: Programmer Essentials for Designing, Building and Tuning Brett Elam bjelam@us.ibm.com - DB2 for z/os: Programmer Essentials for Designing, Building and Tuning April 4, 2013 DB2 for z/os: Programmer Essentials for Designing, Building and Tuning Information Management

More information

A PROPOSAL FOR MODELING THE CONTROL SYSTEM FOR THE SPANISH LIGHT SOURCE IN UML

A PROPOSAL FOR MODELING THE CONTROL SYSTEM FOR THE SPANISH LIGHT SOURCE IN UML A PROPOSAL FOR MODELING THE CONTROL SYSTEM FOR THE SPANISH LIGHT SOURCE IN UML D. Beltran*, LLS, Barcelona, Spain M. Gonzalez, CERN, Geneva, Switzerlan Abstract CELLS (Consorcio para la construcción, equipamiento

More information

Impact of Dependency Graph in Software Testing

Impact of Dependency Graph in Software Testing Impact of Dependency Graph in Software Testing Pardeep Kaur 1, Er. Rupinder Singh 2 1 Computer Science Department, Chandigarh University, Gharuan, Punjab 2 Assistant Professor, Computer Science Department,

More information

SOFTWARE DESIGN COSC 4353 / Dr. Raj Singh

SOFTWARE DESIGN COSC 4353 / Dr. Raj Singh SOFTWARE DESIGN COSC 4353 / 6353 Dr. Raj Singh UML - History 2 The Unified Modeling Language (UML) is a general purpose modeling language designed to provide a standard way to visualize the design of a

More information

UPDM PLUGIN. version user guide

UPDM PLUGIN. version user guide UPDM PLUGIN version 17.0 user guide No Magic, Inc. 2011 All material contained herein is considered proprietary information owned by No Magic, Inc. and is not to be shared, copied, or reproduced by any

More information

Designing a System Engineering Environment in a structured way

Designing a System Engineering Environment in a structured way Designing a System Engineering Environment in a structured way Anna Todino Ivo Viglietti Bruno Tranchero Leonardo-Finmeccanica Aircraft Division Torino, Italy Copyright held by the authors. Rubén de Juan

More information