Deliverable D4.5. REMICS Migrate Principles and Methods


REuse and Migration of legacy applications to Interoperable Cloud Services (REMICS)
Small or Medium-scale Focused Research Project (STREP), Project No.
Deliverable D4.5 REMICS Migrate Principles and Methods
Work Package 4
Leading partner: SOFTEAM
Author(s): Antonin Abhervé, Andrey Sadovykh, Satish Srirama, Pelle Jakovits, Michael Smialek, Wiktor Nowakowski, Nicolas Ferry, Brice Morin
Dissemination level: Public
Delivery Date:
Final Version:
Copyright REMICS Consortium

Versioning and contribution history

V 0.0 - Document Structure - Antonin Abhervé (Softeam)
V 0.1 - Context and Introduction - Antonin Abhervé (Softeam)
V 0.2 - Migrate Principle - Antonin Abhervé (Softeam)
V 0.3 - Tools & Techniques by Modelio - Antonin Abhervé (Softeam)
V 0.4 - Migration Example - Antonin Abhervé (Softeam)
V 0.5 - Cloud resource provisioning & Automated deployment of cloud application - Nicolas Ferry, Brice Morin (SINTEF)
V 0.6 - Requirement Based Migration - Wiktor Nowakowski (WUT)
V 0.7 - OLAP-OLTP Principles - Huber Flores (UT)
V 0.8 - Enabling the horizontal scaling of migrated OLTP applications - Pelle Jakovits (UT)
V 0.9 - Summary and Conclusion - Antonin Abhervé (Softeam)
V 1.0 - Review

Executive Summary

With the advent of cloud computing platforms, many companies are studying the migration of legacy applications to the cloud. The main difficulty in dealing with such systems is obsolescence, whether due to dependency on an obsolete platform, incomplete or incorrect documentation, or an architecture inappropriate for the cloud. The FP7 project REMICS (REuse and Migration of legacy applications to Interoperable Cloud Services) provides a model-driven approach to extract valuable information from existing code and to automate the refactoring of old code into cloud-enabled architectures. To do so, REMICS proposes a model-driven approach based on tool-supported steps: Requirements Engineering, Recovery, Migration and Validation.

Work Package 4 addresses the Migration aspects of the REMICS process, whose main goal is to provide a generic process for building service cloud applications starting from recovered business models. This model-based migration approach consists of three distinct phases: the Architecture Recovery Phase, which allows the identification and refactoring of the high-level components of the legacy application; the Migration Phase, which manages business and technological migration; and the Deployment Phase, which deals with the deployment of the migrated application to the cloud. The work performed during each step is partially automated by the tools developed in the project.

This document focuses on the description of the process and its associated tools, and on our experience in applying the process to an industrial case study. It summarizes the algorithms, transformations and methods developed in WP4, including architecture decomposition, SOA and Cloud Computing patterns and transformations, design by composition methods, the OLAP-OLTP principles and requirements-based migration.

Intended Audience

The first part of this document presents general information about the migration methods developed in the REMICS context and is dedicated to a wide community of specialists and scientists interested in learning about the REMICS architecture and process. The second part is a detailed description of the migration processes, including a description of the tools and techniques involved in each step, and is dedicated to an audience looking for detailed information about the techniques developed in the project. The final part provides a detailed example of the application of this process to a concrete case study and is dedicated to the wide community that would apply the migration process.

Terminology and Abbreviations

ADM - Architecture Driven Modernization
IaaS - Infrastructure as a Service
IT - Information Technology
KDM - Knowledge Discovery Metamodel
MDA - Model Driven Architecture
MDE - Model Driven Engineering
MDI - Model Driven Interoperability
MDD - Model Driven Development
OMG - Object Management Group
PaaS - Platform as a Service
SaaS - Software as a Service
SOA - Service Oriented Architecture
SoaML - Service Oriented Architecture Modelling Language
OLAP - Online Analytical Processing
OLTP - Online Transaction Processing
REMICS - REuse and Migration of legacy applications to Interoperable Cloud Services

Table of Contents

Executive Summary
Intended Audience
Terminology and Abbreviations
Table of Contents
1 Introduction
1.1 Context and Objectives
1.2 Requirements Traceability
2 Migrate Principles in REMICS
2.1 Migrate Overview
2.2 Migrate Process
2.2.1 Model Based Migration
2.2.2 Requirement Based Migration Process
2.2.3 Tools Involved During the Migration Process
2.3 Model Driven Approach
2.3.1 Architecture Driven Modernization
2.3.2 Models Involved in the Migration Process
3 Recovery Phase
3.1 Recovery Process
3.2 Tools and Techniques
3.2.1 Component Recovery from Legacy Code
3.2.2 Domain Model Identification and Exploitation
4 Migration Phase
4.1 Migration Process
4.2 Tools and Techniques
4.2.1 Model Based Architecture Refactoring
4.2.2 Design by Pattern Composition
4.2.3 Model to Code Transformation
5 Deployment Phase
5.1 Deployment Process
5.2 Tools and Techniques
5.2.1 Deployment Modelling
5.2.2 Cloud Resource Provisioning
5.2.3 Automating Deployment of Cloud Application
5.2.4 OLAP-OLTP Principles
5.2.5 Enabling the Horizontal Scaling of Migrated OLTP Applications
6 Summary and Conclusions

Table of Figures

Figure 1: Migrate Toolkit in Global Process
Figure 2: Migration Process at high level
Figure 3: The Model Based migration process
Figure 4: The Requirement Based Migration Process
Figure 5: Tools involved in migration
Figure 6: Models involved in migration
Figure 7: Detailed view of migration models
Figure 8: Recovery Phases in detail
Figure 9: State Diagram of recovery phases
Figure 10: Behaviour Modelling
Figure 11: Design Structure Matrix applied to a flow chart
Figure 12: Analyse application architecture using DSM
Figure 13: An example notion diagram in RSL
Figure 14: An example of alternative use case scenario representations
Figure 15: Problem listing in the ReDSeeDS tool
Figure 16: Migration Phases in detail
Figure 17: State Diagram of Migration Phases
Figure 18: Pattern Catalogue
Figure 19: Implementation Model
Figure 20: From Activity Diagram to Code
Figure 21: Dynamic Core Architecture
Figure 22: Deployment Phases in detail
Figure 23: Deployment Metamodel - Deployment Architecture
Figure 24: Deployment Metamodel - Resource Providing
Figure 25: Example of CPSM derivation
Figure 26: Example of CPSM runtime enrichment
Figure 27: Life-cycle of an application
Figure 28: Core components for scaling an OLTP system in a cloud instance
Figure 29: A basic scale-out configuration with uniform distribution of transactions
Figure 30: A basic scale-out configuration of an OLTP system with sticky sessions
Figure 31: Augmenting computational capabilities to improve performance of an OLAP system when facing data-intensive processing
Figure 32: Illustration of the original deployment model of the migrated Dome pilot case
Figure 33: Software components separated between two cloud instances
Figure 34: Modified deployment - dome behaviour node and added load balancer
Figure 35: Performance monitoring and autoscaling logic introduced to the deployment
Figure 36: Sequence diagram of the autoscaler component integration with CloudML and CollectD

1 Introduction

1.1 Context and Objectives

In order to Recover the Source Architecture, the process starts with an analysis of the available legacy system artefacts: source code, binaries, documentation, users' knowledge, configuration files, execution logs and traces. This activity is supported by automated reverse engineering and knowledge discovery methods. This information is then translated into models covering different aspects of the architecture: Business Processes, Business Rules, Components, Implementation and Test specifications. This model is the starting point for the Migrate activity, during which the migrated system is reorganized into new service components. The new architecture of the migrated system is built by applying specific SoaML / Cloud Computing patterns and methods such as architecture decomposition, legacy component wrapping and legacy component replacement with newly discovered cloud services. Design by Service Composition completes these methods, providing developers with tools that simplify development by reusing the services and components available in the cloud. The system is rebuilt for a new platform in a forward MDA process by applying a specific transformation dedicated to service cloud platforms. Finally, the deployment of the migrated application is modelled using the PIM4Cloud profile and the rebuilt system is deployed on a cloud computing environment using resource provisioning and automated deployment tools (Cloud Script).

Figure 1: Migrate Toolkit in Global Process

The migration process (Figure 1) is supported by two complementary activities: Model Driven Interoperability and Validate, Control and Supervise. Model Driven Interoperability (MDI) aims at adapting existing services to the services required to complete the Target Architecture in the component replacement or service composition sub-activities. The Validate, Control and Supervise activity regroups the technologies dedicated to ensuring the correctness of the IT system migration. Indeed, it should be ensured, for example, that the recovered Source Architecture corresponds to the Legacy System, in both structural and behavioural aspects. In the Migrate activity, the Business Processes and Rules of the Source Architecture have to be fully implemented in the Target Architecture.

1.2 Requirements Traceability

D4.5 is a deliverable of WP4, whose original objectives are:

1. to specify the PIM4Cloud SoaML extension
2. to define advanced methods for architecture decomposition
3. to define advanced automated methods for applying SOA and Cloud Computing patterns and transformations for legacy component replacement and wrapping
4. to define advanced methods for architecture design by service composition
5. to specify design patterns of OLAP and OLTP systems from the deployment perspective
6. to develop transformations, connectors and methods for cost-effective migration of legacy applications and systems into cloud infrastructure
7. to define methods and implement tools to transform service-oriented architectural models from requirements-level specifications.

D4.5 fulfils these objectives as follows:

1. See section Deployment Modelling
2. See section Component Recovery from Legacy Code
3. See sections Recovery Process and Migration Process
4. See section Design by Pattern Composition
5. See section OLAP-OLTP Principles
6. See sections Migration Process, Model Based Architecture Refactoring and Executable Model
7. See sections Requirement Based Migration Process and Domain Model Identification and Exploitation

2 Migrate Principles in REMICS

2.1 Migrate Overview

Cloud computing and SOA are recognized as game-changing technologies for cost-efficient and reliable service delivery. The Software as a Service paradigm is becoming more and more popular, enabling flexible licence payment schemes and moving infrastructure management costs from consumers to service providers. However, building a SaaS system from scratch may require a huge investment in terms of time and effort. Moreover, organizations' legacy systems are difficult to reuse due to platform, documentation and architecture obsolescence.

OMG MDA (Model Driven Architecture) and related efforts around domain-specific languages have gained much popularity. These technologies put the model at the centre of the software engineering process (MDE). Software products are built through successive model refinements and transformations, from business models (process, rules, motivation) down to component architectures (e.g. SOA), detailed platform-specific design and finally implementation. Similarly, OMG ADM (Architecture Driven Modernization) proposes to start with knowledge discovery to recover models and then to rebuild the new system in a forward MDA process.

The REMICS project provides a new development paradigm for the migration of existing IT systems to service cloud platforms through innovative model-driven technologies. The baseline concept is the Architecture Driven Modernization (ADM) by OMG. In this concept, modernization starts with the extraction of the architecture of the legacy application. Having the architectural model helps to analyse the legacy system, to identify the best ways for modernization and to benefit from MDE technologies for the generation of the new system. The project significantly enhances this generic process by proposing sets of advanced technologies for architecture recovery and migration, involving innovative technologies such as Model Driven Interoperability and Models@Runtime.

The Migrate work package provides a generic process for building service cloud applications starting from recovered business models. This model-based migration approach consists of three distinct phases: the Recovery Phase, the Model Based Migration Phase and the Deployment Phase. Each of these phases is composed of several models, each representing the application in a different state of the migration process. The legacy application is migrated to a new application through various model-driven modernization tools (Figure 2).

Figure 2: Migration Process at high level

REMICS also provides a second migration approach, the Requirement Based Migration process, in which the migrated application is rebuilt from the requirements of the legacy application. This approach can be complemented by the Model Based Migration approach.

The migration process requires several mechanisms which can be applied to the models representing the application in the different states of the migration process:

- Refactoring of models, assisted by tools. This step includes the refactoring of the application architecture, the Requirement Based Migration, the addition of new services, and the implementation of SOA and PIM4Cloud patterns.
- Automatic model transformation to move from one model to another, such as the passage from the architecture model to an implementation model.
- Deployment modelling and deployment automation of migrated applications on the Cloud.

2.2 Migrate Process

2.2.1 Model Based Migration

The Migrate Process takes as input the Recovered Model provided by the Recovery phase developed in WP3 of the REMICS project. The application is represented as a UML model which centralizes all the application logic and knowledge of the legacy system, including the data model, business logic and user interface organisation. A second approach takes as input the requirements of the legacy application, with the aim of rebuilding a new application that provides a set of services and functions equivalent to those offered by the legacy application.

This process consists of three distinct phases: the Recovery Phase, which continues the identification of high-level service components and the modelling of their internal structure; the Migration Phase, which includes the refactoring operations of the application architecture and the production of the implementation model of the new application; and the Deployment Phase, which addresses the deployment of this application on a cloud computing platform (Figure 3).

Figure 3: The Model Based migration process

At the end of the migration phases, we obtain a fully functional application deployed on the Cloud.

2.2.2 Requirement Based Migration Process

The Requirement Based Migration process offers an alternative to Model Based Migration for migrating a legacy application. This process does not use the source code of the legacy application as input; it is based instead on requirements extracted from the legacy system.

Figure 4: The Requirement Based Migration Process

Figure 4 shows an overview of the migration process, in which the requirements-level model extracted from the legacy system is migrated to target application code. The migration process encompasses the techniques of model-driven forward engineering. Throughout this process we use essential specifications expressed in the RSL language presented in Deliverable D3.6 REMICS KDM extension for application logic. First, the initial RSL model can be refactored according to the needs of the migration phase and to meet the validation rules. Because the refined model conforms to a precise metamodel, a model transformation engine can be used to generate the target system structure models and code automatically. The subsequent steps of the process are described below.

Refactor Model

The migration process starts with the re-factorisation of the initial model in RSL. This model contains use case scenarios describing sequences of user-system interactions in relation to the domain logic within which a software system operates. Such information is extracted during the recovery process from a

legacy system by determining its observable behaviour, and is then stored in the form of models in RSL. The recovery process using the TALE tool is described in detail in Deliverables D3.4 REMICS Recover Toolkit and D3.5 REMICS Recover Principles and Methods.

Thanks to the characteristics of the RSL language, the initial model is easily understandable to people, even those with little knowledge of the legacy system. This makes it easy to extend and modify. Usually, the initial model includes all the application logic information that is to be migrated to the new system. However, in some situations, changes have to be introduced. Firstly, changes are needed in order to adapt the migrated system to new or changed functionality, or to optimize some scenario flows, e.g. by applying standard application logic patterns (see Deliverable D3.6 REMICS KDM extension for application logic for details). The most important changes, however, include the construction of notions representing the target application domain model. The recovered RSL model contains only notions that could be automatically extracted from the old system, based on its UI analysis. Such notions, called data views, are associated with attributes of simple types and represent data passed from the system to the user through the UI and vice versa (e.g. client data has attributes like name, address, etc.). These notions do not necessarily reflect the domain model of the legacy system. Based on them, Data Transfer Objects are generated in the target system when performing the transformation from RSL to code. Therefore, notions that are compositions of the attributes pointed to by data view notions, and thereby make up the domain model, need to be created manually.

Validate Model

The refined model should be validated in order to ensure that it conforms to the grammatical rules and conventions of the RSL language and that the specification is coherent. This validation helps to ensure that the transformation performs well and that all elements of the target model and code are generated correctly. The validation mechanism checks, for example, the sequence of scenario sentences, the linking between scenarios and the domain specification, the relationships between notions, etc.

Transformation RSL to Code

The refined and validated RSL model, containing both the still-relevant legacy requirements that were recovered and optionally some new ones, is the input for the transformation task. During this task, a target system design and implementation-level artefacts are generated. The RSL to Code transformation generates the full structure of the system following the MVP architectural pattern (Potel, 1996), including complete method contents for the application logic (Presenter) and presentation (View) layers. It also provides a code skeleton for the domain logic (Model) layer. To run a transformation, the user has to choose one of the available transformation programs. If the selected transformation is configurable, configuration parameters also have to be given. The rest of the task is performed automatically from the user's perspective. In fact, this is done in three subsequent steps. Firstly, the actual model transformation is performed by a model transformation engine. It takes the refined and valid RSL specification as input and produces a UML model with embedded code in method bodies.
Then, in order to generate complete source code, the generated UML model has to be exported to an external UML modelling tool that provides code generation capabilities, for example Modelio. This task is carried out automatically just after the transformation is finished, and all the UML model elements are transferred to the external tool through its API. Finally, the UML tool's code generator is used to generate the final source code of the target system, reflecting the requirements-level constructs from the refined RSL model.
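To make the generated structure more concrete, the following Java sketch illustrates the kind of MVP skeleton and Data Transfer Object such a transformation could produce for a single recovered use case. All names (ClientData, ClientView, ClientModel, ClientPresenter) are invented for illustration and are not taken from the deliverable or from the actual ReDSeeDS transformation output; each type would normally be generated into its own file.

    // Data Transfer Object generated from a recovered "data view" notion.
    public class ClientData {
        public String name;
        public String address;
    }

    // View layer: skeleton of a screen recovered from the legacy UI.
    public interface ClientView {
        void showClientData(ClientData data);
    }

    // Model layer: domain logic skeleton, to be completed manually.
    public interface ClientModel {
        ClientData fetchClient(String clientId);
    }

    // Presenter layer: application logic generated from the use case scenarios.
    public class ClientPresenter {
        private final ClientView view;
        private final ClientModel model;

        public ClientPresenter(ClientView view, ClientModel model) {
            this.view = view;
            this.model = model;
        }

        // Corresponds to a scenario step such as "System shows client data".
        public void showClient(String clientId) {
            view.showClientData(model.fetchClient(clientId));
        }
    }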

2.2.3 Tools Involved During the Migration Process

Four partners are involved in the development of the Migrate Toolkit. The UML models extracted from the legacy code by Netfective are transferred to the SOFTEAM tools based on the Modelio environment. The Modelio toolkit regroups the application refactoring, generation and cloud environment modelling tools. The model of the Web application is created through a continuous refactoring and refinement process, and is used to generate the application code. Thereafter, a cloud deployment environment is modelled with PIM4Cloud. In a parallel process, the ReDSeeDS tool provided by WUT offers an alternative migration mechanism based on requirements analysis of the legacy application. SINTEF's CloudML is used to deploy the migrated application on a Cloud Computing environment. Finally, the Desktop2Cloud Migration tool provided by the University of Tartu is used to deploy the cloud images to the infrastructure, configure the installation and manage the application life-cycle.

The tools and their providers (Figure 5):

- Netfective: BluAge - Model Recovery
- Softeam: Modelio - Migration Process Support, Component Model Discovery, Design by Pattern Composition, Model Driven Refactoring, Code Generation from Architecture, Deployment Modelling
- SINTEF: CloudML - Cloud Resource Provisioning, Deployment Automation
- University of Tartu: Desktop2Cloud - Image Transformation, Deployment Configuration, Scalability Management, Life-Cycle Management
- WUT: ReDSeeDS - Requirement Based Migration

Figure 5: Tools involved in migration

2.3 Model Driven Approach

2.3.1 Architecture Driven Modernization

IT systems are subject to an increasing demand for modernization. Companies often change technological platforms due to numerous factors, including the risk of platform and/or language obsolescence, ownership cost, and system usability, performance and scalability issues. Changes in business rules and processes may also drastically renew the requirements for IT systems and their underlying technological platforms, and thus lead to modernization.

The REMICS project promotes a new development paradigm for the migration of existing IT systems to service cloud platforms through innovative model-driven technologies. The baseline concept is the Architecture Driven Modernization (ADM) by OMG. Architecture Driven Modernization is the process of understanding and evolving existing software assets for the purpose of software improvement, modification, interoperability, refactoring, restructuring, reuse, porting, migration, language translation, enterprise application integration, SOA and MDA migration.

In this concept, the modernization process starts with an analysis of the available legacy system artefacts: source code, binaries, documentation, users' knowledge, configuration files, execution logs and traces. This activity is supported by automated reverse engineering and knowledge discovery methods which allow us to Recover the Source Architecture. This model is the starting point for the Migrate activity. During this activity, the new architecture of the migrated system is built by applying specific SOA/Cloud Computing patterns and methods such as component identification, architecture decomposition, legacy component wrapping and legacy component replacement with newly discovered cloud services. This method is complemented with generic Design by Service Composition methods, providing developers with tools that simplify development by reusing the services and components available in the cloud. The integration of each new Cloud service is carried out on the model at the Architecture layer using the pattern tools. The system is then rebuilt for a new platform in a forward MDA process by applying a specific transformation dedicated to service cloud platforms.

2.3.2 Models Involved in the Migration Process

To support the Architecture Driven Modernization strategy, we have defined a set of models representing the migrated application at successive stages of the migration process.

Figure 6: Models involved in migration

The recovery phase is based on two models, the Recovered Model and the Component Model. The Recovered Model corresponds to the initial model, resulting from the work of WP3. This model is imported into the migration toolkit using the XMI format.

The Component Model is the result of the breakdown of the architecture into coarse-grained components providing business services. This model is generated from the initial model by model transformation and involves the Component Recovery Tool.

The migration phase involves the Architecture Model, the Implementation Model and the Test Model.

The Architecture Model highlights the organisation of the applications, the services provided, the data types exchanged by these services, the data model and the presentation aspects. It is on this model that most of the refactoring work is carried out. At this layer, the SoaML profile is used to model the application architecture. This model is obtained by model transformation of the Component Model.

The Implementation Model represents the code of the target application and is specific to the execution platform. It is obtained by model transformation from the Architecture Model; during this transformation, it is necessary to choose the target execution platform of the application. An executable application can be generated from this model, but at this level the context of application deployment is not yet taken into consideration.

The Test Model is intended to record the modelling of integration tests and unit tests. These tests result from the integration of the outcomes of Work Package 6.

The deployment phase involves a Deployment Model and a Distributed Test Model. The Deployment Model allows the modelling of the deployment of the application on a cloud computing platform. The service components identified at the architecture layer are used to make the link between the Architecture Model and the Deployment Model. The modelling of application deployment is based on the PIM4Cloud profile.

Figure 7: Detailed view of migration models

To this horizontal breakdown of the migration process, we add a vertical division that isolates the major aspects of a service-oriented application. Starting from the Component Model of the recovery phase, we isolate the Data Model, Behaviour and Presentation aspects of the implementation. Thereafter, each of these aspects is processed independently during the migration process. They are represented in more detail in the remainder of this document.

3 Recovery Phase

3.1 Recovery Process

Service cloud platforms impose a development paradigm based on highly decoupled, reconfigurable SOA components, whereas architecture recovery activities usually result in monolithic and closely coupled architectures. The migration of legacy applications to a cloud computing platform therefore implies a deep refactoring of their architecture. To facilitate interaction between the different modules, services or systems while maintaining their individual independence, the main goal is to obtain several independent and reusable services. The objective of the Recovery Phase is the identification of weakly coupled Model Components extracted from the initial recovered model.

Figure 8: Recovery Phases in detail

This phase starts with the import of the recovered (UML) models into Modelio. We use the Component Discovery tool on the initial recovered model to identify potential Architecture Components by dependency analysis. Using a Design Structure Matrix and a decomposition algorithm, a decomposition of the Recovered Model into components is proposed to the operators. Based on their knowledge of the business aspects of the application to migrate, the operators can validate this decomposition or restart the dependency analysis process after adding some dependency information to the recovered model. Once the decomposition has been validated, a set of transformation rules refactors the model into components following this decomposition. Once these components are discovered, we manually identify the type of each component, separating components of Presentation, Behaviour and Entity types.

This Component Model is later used as input for the Architecture Model of the Migration Phase. A model transformation is applied to the Component Model to generate the Architecture Model: according to the component types identified in the previous phase, a set of transformation rules is applied to each component.

Figure 9: State Diagram of recovery phases

3.2 Tools and Techniques

3.2.1 Component Recovery from Legacy Code

Component Modelling

The migration process defined in REMICS includes the task of refactoring the architecture of existing legacy applications using design patterns for SOA and cloud computing. The Component Model is the result of this breakdown activity. It is created from the initial recovered model using the Component Recovery Tool.

The initial recovered model is the model resulting from the recovery phases of the overall REMICS process (the outcome of the work performed by WP3). Stored in XML format (EMF UML2), the model is imported into Modelio using standard import/export functions. The Component Recovery Tool is used to extract the Component Model from the initial model; its usage is described more precisely in the next chapter of this document.

The Component Model is structured as a set of components, each containing a sub-part of the initial model. As the objective of the breakdown process is to obtain components that can be deployed on distributed architectures, they are weakly coupled. During the component discovery process, we identify three types of components: components belonging to the Presentation layer, components belonging to the Behaviour layer and components belonging to the Entity layer.

Presentation Components include the model elements representing the graphical user interface of the application. Behaviour Components include the modelling of the services provided by the application, grouped into interfaces; this layer also contains the type definitions of the data exchanged by the services. Services are represented by UML operations and their behaviour is expressed as Activity diagrams.

Figure 10: Behaviour Modelling

Entity Components contain the Data Model of the application. They are represented by means of UML class diagrams.

Component Recovery Technique

The Component Recovery tool provides methods for component identification and for the discovery of their services, interfaces and dependencies. This tool is the first in the process of decomposing a legacy application architecture into components compatible with cloud computing platforms, and achieves the breakdown of the application architecture into high-level components providing business services. The Component Recovery tool operates following three principles: the discovery of dependencies between the model elements of the application architecture, the representation of these dependencies in the form of dependency diagrams or a dependency matrix dedicated to human analysis, and the exploitation of this matrix to suggest a reorganization of the application architecture into components.

A key point is that not all the information required to achieve a division into business components can be discovered by analysing the input model: some business constraints are not expressed in the original model. This is why the tool allows the user to define these constraints between the model elements, either by defining the dependencies of the business process steps or by acting directly on the composition of a part of the model's components:

Tool Input: The Component Recovery tool takes as input a UML model representing the architecture of the legacy application, provided by the Recover phases of the REMICS migration process.

Search for Dependencies: Dependencies between services are discovered automatically by dependency analysis of the UML input model. The tool also aims at integrating concepts for the consolidation of services based on business functionality. However, these concepts are not directly extractable from the code and must involve a human operator. The process therefore integrates into a single methodology the dependencies extracted from the UML model and the business dependencies resulting from manual intervention.

Dependency Analysis: The analysis of these dependencies and the division into components is performed using the Design Structure Matrix, a mathematical tool that allows the relationships between the elements of a system to be displayed in a compact and analytically advantageous format. Dependency analysis is performed by applying a processing algorithm to the Architecture Design Structure Matrix, thus helping to identify groups of strongly coupled model elements. At the end of this analysis, the tool suggests a breakdown of the original application into components.

Component Discovery Methodology

Dependency Analysis for Architecture Decomposition

The extraction and exploitation of dependencies has been a subject of research since Parnas first formulated the notion of inter-module dependency in his early papers. The extraction of a dependency model from a UML model is based on two prerequisites: the definition of the units of analysis, and the definition of the business rules that characterise a dependency between two of these units.

The first step is to define the units of analysis on which we will apply the dependency analysis. A UML model is a strongly connected model; in this context, it is necessary to apply the dependency extraction algorithm to a subset of the model. The choice of the elements to include in this subset is essential because these elements will become the atomic elements to which we apply our component discovery algorithm.

The second step is to define a set of business rules that allow us to identify a dependency between two of these atomic elements. The definition of these business rules allows the user to distinguish the dependencies that are problematic, because they violate architectural assumptions, from those that are expected and reasonable. Some of these business rules are generic and can be applied to any UML model, while others are specific to the format of the analysed model. Here are some examples of generic business rules used for most analyses:

- When two elements that belong to two different components are linked, a link, called an abstract link, is established between these two components. An abstract link is a link between two different components that contain two linked elements, each belonging to one of the two components.
- If an element A is a redefinition of an element B, then element A depends on B.

Once these two steps have been completed, we apply the following algorithm to the selected model element subset; the defined business rules are used in steps (b) and (c) to determine the dependency relationships between elements:

A. Retrieve the root element from which the analysis is initially launched.
Let (X) represent the current level of analysis; it is initially equal to 0 (representing the root element). A level brings together elements with the same parent composition.

B. For each element of level (X):
   a. Get the type of the element being analysed.
   b. Add to level (X + 1) the direct dependencies of the current element (the elements linked to it according to the business rules).
   c. If it is a composed element (an element with sub-elements):
      - analyse the dependencies in the components or sub-components of the element;
      - add the resulting abstract dependencies at level (X + 1).
   d. Increment (X).

C. If (X) has not reached the requested level of analysis and there are still dependencies to visit, go back to step B; otherwise stop.
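As an illustration of this traversal, the following Java sketch shows one possible reading of the algorithm. The ModelElement interface and the two abstract hooks (directDependencies, owningComponent) are assumptions introduced for this example to stand in for the pluggable business rules; they are not APIs of the Modelio tool.

    import java.util.*;

    public abstract class DependencyExtractor {

        public interface ModelElement {
            List<ModelElement> subElements();     // composition children
            boolean contains(ModelElement other); // composition containment
        }

        // Business rules deciding which elements a given element depends on (steps b and c).
        protected abstract Collection<ModelElement> directDependencies(ModelElement element);

        // Maps an element to the candidate component that owns it (used for abstract links).
        protected abstract ModelElement owningComponent(ModelElement element);

        // Runs the analysis from the root element down to the requested level.
        public Map<ModelElement, Set<ModelElement>> extract(ModelElement root, int requestedLevel) {
            Map<ModelElement, Set<ModelElement>> result = new LinkedHashMap<>();
            Set<ModelElement> visited = new HashSet<>();
            List<ModelElement> level = new ArrayList<>(List.of(root));      // X = 0: the root element
            for (int x = 0; x <= requestedLevel && !level.isEmpty(); x++) { // step C: loop control
                List<ModelElement> next = new ArrayList<>();                // collects level X + 1
                for (ModelElement element : level) {
                    if (!visited.add(element)) continue;                    // analyse each element once
                    Set<ModelElement> deps = new LinkedHashSet<>(directDependencies(element)); // step b
                    for (ModelElement child : element.subElements()) {      // step c: composed elements
                        for (ModelElement target : directDependencies(child)) {
                            if (!element.contains(target)) {
                                deps.add(owningComponent(target));          // abstract dependency
                            }
                        }
                    }
                    result.put(element, deps);
                    next.addAll(deps);
                }
                level = next;                                               // step d: increment X
            }
            return result;
        }
    }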

We are unable to extract from an input data model all the information that would be needed to fully automate the discovery of components. Some information, such as the business role of the architectural elements, may not be present in the input model. For this reason, the intervention of a user who knows the business side of the application being upgraded is required. To support this approach, the tool offers several documentation generators that work from the UML model. The generated documents are used to guide the user during the component discovery process and to highlight the essential information extracted from the UML model.

Design Structure Matrix

A Design Structure Matrix is a mathematical tool that allows the relationships between the elements of a system to be displayed in a compact and analytically advantageous format. It is presented as a square matrix with identical row and column labels; the relationships between the system components are represented by a value at the intersection of the rows and columns of the matrix. Once the matrix is initialized from the data collected during the dependency analysis phase, several algorithms can be applied to it in order to highlight the coupling relationships between the elements of the system. Using this mechanism, we can propose to the developer a decomposition of the legacy application architecture into consistent subsystems.

The design structure matrix (DSM), also referred to as the dependency structure method or dependency structure matrix, is an accepted method for analysing and enhancing the design of products and systems (Figure 11). The use of the design structure matrix in system modelling can be traced back to Warfield in the 1970s and Steward (1981). Interest in this mathematical tool grew in the 1990s. Eppinger (1994) used a matrix representation to capture both the sequence of, and the technical relationships among, design tasks, which was then analysed in order to find alternative sequences or definitions of the tasks. Eppinger (2001) also used the DSM in the context of project management. The DSM lists all constituent subsystems or activities and the corresponding information exchange and dependency patterns.

Figure 11: Design Structure Matrix applied to a flow chart

Several types of Design Structure Matrices have been developed to handle problems of different kinds, for users such as product developers, project planners, project managers, system engineers and organisational designers:

- Architecture DSM (Component-Based): used for modelling system architectures based on components or subsystems and their relationships.
- Team-Based or Organisation DSM: used for modelling organisation structures based on people or groups and their interactions.
- Schedule DSM (Activity-Based): used for modelling processes and activity networks based on activities and their information flows and other dependencies.

Throughout the REMICS project, we focus specifically on the Architecture Design Structure Matrix, which is very appropriate for modelling systems based on components. After being initialised with information regarding the dependencies between the elements of the system, the DSM becomes a good basis for the execution of a set of matrix operations that allow the relationships between the elements of the system to be identified. In the REMICS project, we use partitioning, banding and clustering processing algorithms.

The Architecture DSM Model

An Architecture Design Structure Matrix is a type of matrix that can be used to represent the interactions among the elements of a system architecture. The final goal of this approach is to highlight the decomposition of the system into subsystems. The construction, initialisation and exploitation of an Architecture DSM in the component discovery methodology involve three steps:

A. Decompose the system into elements: in the REMICS project, we work from a UML model that is already structured into services. These services are the elements we want to reorganise into components; we use them as the rows and columns of the matrix.

B. Collect and document the interactions between the elements: in the context of the REMICS project, the collected interactions come from two sources, the UML model and the business data provided by the user. We use the relationships resulting from the dependency extraction phase applied to the UML model.

C. Analyse the re-integration of the elements: we use the clustering, banding and partitioning algorithms on the matrix in order to bring out parallel, sequential or interdependent groups of services (Figure 12).

Figure 12: Analyse application architecture using DSM

For the REMICS migration process, we need to manipulate two different types of dependencies: those resulting from the exploitation of the UML model and those modelled by business users. These two types of interaction data do not carry the same weight when it comes to applying the processing algorithms to the matrix, so we must be able to differentiate between the two types of relationships stored in the matrix. Pimmler and Eppinger suggest two ways of doing this. Firstly, we can use a single three-dimensional matrix, which can represent multiple types of interaction data if each off-diagonal cell contains a vector. A second option is to use a quantification scheme in association with the matrix: the off-diagonal square marks in the matrix are replaced by an integer that associates a weight with the dependency between elements.

Processing Algorithms for the Architecture Design Structure Matrix

To realize the dependency analysis, we apply a processing algorithm to the Architecture Design Structure Matrix. Once created and initialized, design structure matrices are found to be an

excellent support for the partitioning, banding and clustering algorithms. Each of these algorithms highlights different aspects of the relationships between the elements of the system.

- Clustering Decomposition: the clustering algorithm can propose a new system decomposition. The main objective is to maximize interaction between elements within clusters and minimize interaction between clusters. This algorithm also allows the size of clusters to be minimized.
- Partitioning Decomposition: the partitioning algorithm brings components together so as to minimize backward data flow. It provides an automatic mechanism for architectural discovery in a large model. Partitioning eliminates cycles by forming subsystems. The groupings and orderings recommended by these algorithms can be applied in a straightforward manner to reorganise the code base so that its inherent structure matches the desired structure.
- Banding Decomposition: the banding algorithms (band decomposition) offer an organisation of components that provides the best execution path. The components belonging to a band are independent and can run in parallel; components belonging to different bands run sequentially and cannot run simultaneously.
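To make the data structure and the partitioning step more tangible, here is a minimal Java sketch of a weighted Architecture DSM with a simplified partitioning pass. The integer weights follow the quantification scheme mentioned above; the partitioning shown is a plain level-by-level peeling that groups any remaining cyclically coupled elements into a single candidate component, a sketch of the idea rather than the algorithm actually shipped with the Modelio tooling.

    import java.util.*;

    // A minimal Architecture DSM: square integer matrix, one row/column per service.
    public class DesignStructureMatrix {
        private final List<String> elements;
        private final int[][] weight; // weight[i][j] > 0 means element i depends on element j

        public DesignStructureMatrix(List<String> elements) {
            this.elements = new ArrayList<>(elements);
            this.weight = new int[elements.size()][elements.size()];
        }

        public void addDependency(String from, String to, int w) {
            weight[elements.indexOf(from)][elements.indexOf(to)] += w;
        }

        // Repeatedly peel off the elements whose dependencies are already placed
        // (no backward flow); whatever remains is coupled through a cycle and is
        // grouped as a single strongly coupled candidate component.
        public List<List<String>> partition() {
            List<List<String>> bands = new ArrayList<>();
            Set<Integer> placed = new HashSet<>();
            while (placed.size() < elements.size()) {
                List<Integer> ready = new ArrayList<>();
                for (int i = 0; i < elements.size(); i++) {
                    if (placed.contains(i)) continue;
                    boolean ok = true;
                    for (int j = 0; j < elements.size() && ok; j++) {
                        if (j != i && weight[i][j] > 0 && !placed.contains(j)) ok = false;
                    }
                    if (ok) ready.add(i);
                }
                List<String> band = new ArrayList<>();
                if (ready.isEmpty()) { // the remaining elements form dependency cycles
                    for (int i = 0; i < elements.size(); i++) {
                        if (placed.add(i)) band.add(elements.get(i));
                    }
                } else {
                    for (int i : ready) {
                        placed.add(i);
                        band.add(elements.get(i));
                    }
                }
                bands.add(band);
            }
            return bands;
        }
    }

For instance, two services that invoke each other end up in the same band, which the operator can then review as one candidate component.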

3.2.2 Domain Model Identification and Exploitation

Domain Model Identification is mainly used during the Requirement Based Migration process (described in section 2.2.2). This process is performed either automatically or manually within the ReDSeeDS tool. Below is a description of some techniques and tips which are helpful when performing these tasks.

Model Refactoring Techniques

The Refactor Model task involves manual modifications of the recovered model in RSL, as mentioned in the previous section. All these modifications can be made in the ReDSeeDS tool, which offers a comprehensive RSL editor. A basic application logic pattern library is also supplied with the tool.

Creating the domain model

The most crucial modification is the reconstruction of the legacy system's domain model, which cannot be retrieved automatically using the external observation techniques applied to the system's behaviour. Domain concepts can be created in two ways: either by adding them in a tree-like Software Case Browser or by using the diagramming feature of the RSL Editor. While the Software Case Browser is better suited for managing the structure of the domain specification, the recommended technique for creating relationships between notions is to use notion diagrams. For clarity, it is best to create one notion diagram for each use case. Such diagrams show only those notions that are referenced from the scenarios of the use case the diagram was created for. An example diagram is shown in the figure below.

Figure 13: An example notion diagram in RSL

There are three tiers that can be distinguished in this diagram. The topmost tier is formed by notions representing UI elements of the recovered/target system. These can be notions of the following types: «screen», «button», «message», «confirmation dialog». They are directly related to notions representing so-called data views, for example «simple view» or «list view». As was already mentioned, data views represent flat data structures used for passing data between the user and

the system internals through the system's UI. The structure of data views is defined by associated «attribute» notions, which are notions of simple types. The notions in these two tiers are usually recovered automatically during the recovery process, and they need to be supplemented with notions representing domain concepts. To keep the recommended order of abstraction levels, «concept» notions should be added in the bottom tier of the diagram. They then need to be related to the appropriate sets of attributes, as well as to other concepts (including those not shown in a particular diagram), in order to form a complete domain model.

Optimizing scenario flows

During the analysis of the recovered use case scenarios, which is often a prerequisite for performing the transformation tasks, it is advisable to optimise some scenario flows to better reflect the legacy system's application logic. One of the ReDSeeDS tool's features can be very helpful here: the tool allows the textual scenario representation to be visualized in the form of UML activity diagrams. In the textual representation, scenarios are presented as sequences of numbered sentences in the SVO (subject-verb-object) grammar, interlaced with condition and invocation sentences. A single scenario represents a single story without alternative paths. This representation is most readable for ordinary people, such as users of the legacy system, who are usually reluctant to use any technical notation. Some people, usually analysts and/or migration specialists, prefer a precise structure for use case scenarios in a graphical form. The graphical representation shows all the scenarios of a single use case (the main path and all alternative paths) as one activity diagram. This precisely reflects the flow of control in a use case as a single unit of functional requirement, thus making it easier to analyse. The figure below shows an example of the two alternative representations of the same use case scenarios.

Figure 14: An example of alternative use case scenario representations
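For illustration, a recovered scenario in this textual form might read as follows (the sentences are invented for this example, not taken from a REMICS case study):

    1. Clerk selects the "show client data" button.
    2. System fetches the client data.
    3. System shows the client data window.

A condition sentence such as "client data is not found" would open an alternative path, which the activity diagram representation renders as a decision node with a separate branch.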

Validation Techniques

While writing a requirements specification, the user can run the validation mechanism at any point and for any part of the specification. If there is any incoherency, or if any of the RSL rules is broken, the tool displays a listing of all the encountered problems. Problems are grouped into three levels of severity: Errors, Warnings and Information. Errors are critical problems and should be corrected immediately in order to make the specification conform to the RSL rules, thus ensuring correct execution of the transformation. Problems listed in the Warnings group are not critical, but it is strongly advisable to correct them. Problems classified as Information are just suggestions as to what can be done to make use of all the features offered by RSL.

Figure 15: Problem listing in the ReDSeeDS tool

Transformation Techniques

The refined and valid RSL model serves as the transformation source. Since the transformation task is performed fully automatically, there are no particular techniques to be applied by the user. The only thing the user has to do manually is select the predefined transformation program to be executed by the transformation engine. The user can also change some options influencing the post-transformation subtasks, if the defaults are not suitable. For example, the user can select the external UML CASE tool into which the resulting model will be exported; the selected tool will also be used to generate code from the exported model.

4 Migration Phase

4.1 Migration Process

The purpose of the Migration Phase is to apply methods and tools for migrating the architecture of a legacy application to a Service Oriented Application that can be deployed on service cloud platforms. This phase takes as input the Component Model provided by the recovery phase.

Using a transformation service provided by the Process Management tool, the Component Model is transformed into an Architecture Model that separates the "Data", "Behaviour" and "Presentation" aspects of the application architecture. The Architecture Model is divided into three sub-models that use different representation profiles: Behaviour Components are transformed into a SoaML Architecture Model, Data Components into an Entity Model, and Presentation Components into a Presentation Model.

It is on this model that most of the architecture refactoring work is carried out. Changes in the application architecture, the definition of new services and changes to the behaviour of existing services are carried out on the SoaML model. This refactoring is done either manually or through tools such as the Composition Pattern tools. The application Data Model can be modified through the SQL Designer module, a pre-existing tool delivered with the commercial version of Modelio (Figure 16).

Figure 16: Migration Phases in detail

Once the architectural refactoring is completed, the Process Management module offers the possibility to generate the Implementation Model from the Architecture Model. As the Implementation Model is specific to an execution platform, it is necessary to select this platform before starting the transformation. The current version of the toolkit provides transformation services for the Java EE and .NET platforms.

The Implementation Model is a model representing the source code of the refactored application. Using pre-existing Modelio tools such as SQL Designer and Java Designer, and tools developed in the context of the REMICS project such as Web Designer, it is possible to build an executable application from this model (Figure 17). At the Migration layer, the context of the application deployment is not taken into consideration; the generated application is therefore used to test the result of the refactoring operations.

From the Architecture Model, it is also possible to produce a basic template for the Deployment Model (deployment phase).

Figure 17: State Diagram of Migration Phases

4.2 Tools and Techniques

4.2.1 Model Based Architecture Refactoring

This chapter describes the tools and techniques used to refactor the architecture of the legacy application into a new cloud-ready architecture.

Architecture Modelling

The Architecture Model is used to model the organisation of the applications, the services provided, the data types exchanged by these services, the data model and the presentation aspects. The model is divided into three sub-models that use different representation profiles:

- Entity Model (Data Model) / Persistent profile: the Entity Model is based on the Persistent profile, a pre-existing profile provided by Softeam and dedicated to data modelling and object-relational mapping.
- Service Architecture / SoaML profile: the Service Architecture of the application is based on SoaML, an OMG profile that provides a standard way to architect and model SOA solutions using the Unified Modelling Language.
- Presentation Model / Web profile: the Presentation Model is based on the Basic Web profile, a profile for modelling the web aspects of enterprise applications, developed by Softeam in the context of the REMICS project.

Service Architecture Model

Input: the Service Architecture model is created from the Behaviour Components extracted from the Component Model by a model transformation service provided by the Migration Process tool.

Transformation rules applied (Component Model element, SoaML element, description):

- Component -> Participant: the provider and/or consumer of services. In the business domain a participant may be a person, organization or system; in the systems domain a participant may be a system, application or component.
- Interface -> Service Provider: the Provider stereotype specifies an interface and/or a part as playing the role of a provider in a provider/consumer service.
- Service Interface -> Service Point: defines the interface to a Service Point or Request Point and is the type of a role in a service contract; the offer of a service by one participant to others using well-defined terms, conditions and interfaces.
- Operation -> Operation: a service operation.
- Behaviour -> Activity Diagram: Activity diagrams are used to model operation behaviour.
- Class (exchanged data type) -> Message Type: the specification of the information exchanged between service consumers and providers.

The Service Architecture model is structured following the OMG recommendations relating to the organisation of a SoaML project. Application services are organized around Participants. For each service, we define a Service Provider and a derived Service Interface. A Participant exposes a public service to other participants using Ports. The data exchanged by a service is modelled with Message Types.

Entity Model

The Entity Model is created from the Data Components extracted from the Component Model by a model transformation service provided by the Persistent Profile module. Refer to the documentation of the Persistent Profile module for more details about this transformation. This model is based on a class diagram containing Entities (classes), attributes (properties of entities) and relations (aggregation, composition) between Entities. Among the properties of the entities, some are marked as Identifiers of the entity.
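To show how these mappings can surface in the Implementation Model, here is a hedged Java sketch of one participant rendered for a Java EE target: a service interface with one operation, the message type it exchanges, and a persistent entity with an identifier attribute. The names are invented, and the JAX-WS and JPA annotations are an assumption about a plausible Java EE rendering, not the exact output of the Modelio generators; each type would be generated into its own file.

    import javax.jws.WebService;
    import javax.persistence.Entity;
    import javax.persistence.Id;

    // Message Type: data exchanged between service consumers and providers.
    public class ReservationRequest {
        public String customerId;
        public String roomType;
    }

    // Service Interface of a Participant, exposed through a Service Point.
    @WebService
    public interface ReservationService {
        // Service operation; its behaviour was modelled as an Activity diagram.
        String createReservation(ReservationRequest request);
    }

    // Entity from the Entity Model; the attribute marked @Id is the Identifier.
    @Entity
    public class Reservation {
        @Id
        public String id;
        public String customerId;
        public String roomType;
    }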

Presentation Model

The Presentation Model is created from the Presentation model extracted from the Component Model by a model transformation service provided by the Migration Process tool. In the Recovered Model, the presentation layer of the application is represented by a set of Activity Diagrams structured to represent user interactions. The transformation process is used to extract an MVC model from these diagrams.

Transformation rules applied (Presentation Component → Presentation Model):

Activity → Page: Represents a web page with its associated data model and controller. Each Page is decomposed into a View (root for view modelling), a Model (data model associated with the view), a Controller (modelling of the behaviour of the view) and a Utility component (modelling of the utility services used by the view).

CallOperation (server side) → Controller Operation: Behaviour operation.

ControlFlow (from server to screen) → View Transition: Models the transition from one view to another.

The Recovered Model contains no data on the visual aspect of the user interface; these elements must therefore be modelled manually by the user.

Model structure overview: The model is organized around the notion of a page. Each page includes a view, a model, a controller and a utility component.

View Package: Models the graphical organisation of the page.

Model Package: Models the data associated with the view. The model of a page is represented by a set of typed attributes.

Controller Package: Models the behaviour functions called by the view, as an interface providing operations. Activity Diagrams are used to model the behaviour of these operations.

Utility Package: Models the utility services used by the view.

Design by Pattern Composition

Service-oriented computing has a specific set of strategic goals and benefits associated with it. Most of these goals, such as increasing agility, are well known, as is the fact that attaining them requires designing solutions according to service orientation, a distinct design approach tailored to support service-oriented computing. For the same reasons, deploying an application in a cloud environment means respecting a number of predetermined schemas. The migration phase of a legacy application therefore requires a deep refactoring of the application architecture. To facilitate this migration phase, we build on a design pattern catalogue oriented around SOA and PIM4Cloud patterns. The methodology developed is based on two main axes: the notion of a pattern catalogue and the concept of pattern composition.

Pattern and patterns catalogue

A pattern is a subset of a UML model which addresses a specific problem encountered regularly by architects, for example: how do I define a new Service Component for my application? A pattern is applied to an existing application architecture and leads to the creation of a new set of UML elements that complement the architecture. When applied, a pattern can be configured by the user through several configuration points, so that it can be adapted to a specific context.

The pattern catalogue contains a set of problem-solving techniques and provides invaluable insights as to how and when those techniques should be used to help us attain design goals.

SOA design patterns can be broken down into four types, each of which represents a common scope of implementation:

Service Architecture: The architecture of a single service.

Service Composition Architecture: The architecture of a set of services assembled into a service composition.

Service Inventory Architecture: The architecture that supports a collection of related services that are independently standardised.

Service-Oriented Enterprise Architecture: The architecture of the enterprise itself, to whatever extent it is service-oriented.

Figure 18 : Pattern Catalogue

Pattern composition

By pattern composition, we mean the methods developed in the REMICS project to integrate a given pattern into an existing architecture, and to assemble several patterns in order to build the architecture of the target application as a set of interconnected patterns. This method relies on the graphical modelling of patterns:

A. In specialised diagrams, unmask the existing architecture elements which will be integrated into the pattern.

B. Select one or more patterns from the pattern catalogue and add them to the diagram. Each pattern is represented graphically as a component with a number of provided and required services.

C. Model the dependencies between the elements of the existing architecture and the services required and provided by the patterns. It is also possible to establish dependencies between the provided and required services of two patterns.

D. Apply the transformation service delivered by the Pattern Composition tool to integrate the pattern into the current architecture.

Executable Model (Code Transformation)

Implementation Model

The Implementation Model represents the code of the migrated application. It is specific to the execution platform and uses a formalism dedicated to that platform. For example, the implementation model of an application migrated to the Java EE platform uses the formalism made available by the Java Designer (a solution for Java code generation and reverse engineering in Modelio). From this implementation model, it is possible to generate an executable application.

This model is the result of an automatic model transformation from the Architecture Model. The transformation applied depends on the execution platform of the target application, so this platform must be chosen before carrying out the transformation. The Migration Process module provides a configurable transformation service whose architecture is designed to facilitate the addition of new targets. For more details about the transformation engine, see the chapter dedicated to the Migration Process tool.

The structure of the Implementation Model depends on the execution platform of the migrated application. Figure 19 shows an example of an implementation model for the Java target.

Figure 19 : Implementation Model
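For illustration, a service of the architecture model could materialise in the Java implementation model as an interface plus an implementation skeleton along the following lines (hypothetical names; the exact output depends on the Java Designer conventions):

// Message types exchanged by the service (illustrative).
class OrderRequest { }
class OrderConfirmation { }

// Service interface derived from a SoaML Service Interface.
interface OrderService {
    OrderConfirmation placeOrder(OrderRequest request);
}

// Implementation skeleton; the method bodies are produced from the activity
// diagrams attached to the service operations, as described below.
class OrderServiceImpl implements OrderService {
    @Override
    public OrderConfirmation placeOrder(OrderRequest request) {
        throw new UnsupportedOperationException("generated body goes here");
    }
}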

Model Transformation applied to Activity Diagrams

As part of the REMICS project, and to complete our range of products, SOFTEAM has sought to implement a generic framework for code generation from UML dynamic models (activity diagrams, state machines, etc.) and BPMN diagrams. Why develop a specific generation framework for Activity Diagrams? This tool became necessary for realising the transformation from the architecture model to the implementation model: at the architecture layer the behaviour of services is modelled as UML activity diagrams, whereas at the implementation layer the same behaviour exists in the form of code.

Figure 20 : From Activity Diagram to Code

The activity diagram representing the behaviour of an operation can be complex, with overlapping conditional processing, loops and other control structures. To generate code from the structures induced by loops, alternatives and other patterns of transitions, it is necessary to transform the activity model into a structured model. Moreover, this issue is not specific to activity diagrams and can be generalised to all types of dynamic diagrams. Once these structures have been discovered, it becomes trivial to generate code from the model. For this purpose, the development of a specific transformation tool was found to be necessary.

The objectives of the project can be summarized in three points:

1. Define a reference architecture for code generation and reverse engineering from dynamic diagrams.

2. Define a set of generic services to facilitate the implementation of code generators.

3. Decouple the "Model Processing" and "Target Generation" aspects. The Model Processing aspect combines the functions of graph analysis and structure discovery, while Target Generation addresses the issues of generating code in the desired programming language.

Architecture of the Dynamic Core framework

The architecture of the framework is organised around a component integrating a set of generic services dedicated to generating code from UML dynamic diagrams: the Dynamic CORE component. This component, which integrates with the code generators provided by Modelio, includes a generic pivot model for transformations. This generic model is a union of the concepts used by the UML dynamic diagrams.
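As an illustration of such a union of concepts, the pivot model can be pictured as a small composite structure of statements (a hypothetical Java sketch, not the actual Modelio metamodel):

import java.util.List;

// Hypothetical sketch of the structured pivot model: a union of the control
// concepts found in UML dynamic diagrams, independent of the source diagram.
abstract class Statement { }

class Action extends Statement {         // a single executable step
    final String name;
    Action(String name) { this.name = name; }
}

class Sequence extends Statement {       // an ordered list of statements
    final List<Statement> steps;
    Sequence(List<Statement> steps) { this.steps = steps; }
}

class Alternative extends Statement {    // mutually exclusive branches (if/switch)
    final List<Sequence> branches;
    Alternative(List<Sequence> branches) { this.branches = branches; }
}

class Loop extends Statement {           // a repeated body (while/for)
    final Sequence body;
    Loop(Sequence body) { this.body = body; }
}

A code generator for a given target language then only has to walk this structure, which is the role of the Code Specific components described below.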

Figure 21 : Dynamic Core Architecture

Dynamic CORE: The component incorporating the set of generic services dedicated to code generation from UML dynamic diagrams. This component includes the generic model used as a pivot for transformations. The Dynamic Core provides two extension points: one that enables the registration of a transformation service from a dynamic diagram to the pivot model, and one that enables the registration of a code generation service from the pivot model.

Generic Model: The generic model is a union of the concepts used by UML dynamic diagrams. It can be extended by specific components to solve specific business problems.

Code Specific components: Components offering code generation and reverse engineering services from the pivot model. They handle generation for specific target languages such as Java, C++, C# or XML.

Model Specific components: Components offering model transformation services between specific UML models and the pivot model. They rely on a generic algorithm for extracting a structured model from a directed graph; this algorithm can be applied to any type of UML dynamic model to transform it into a structured pivot model.

The code generation process therefore involves the following steps:

1. Apply the structure discovery algorithm to the activity model.

2. Instantiate a structured model, based on the Generic Model meta-model, from the activity model.

3. Generate target implementation code by browsing the structured model.

Structure discovery algorithm

The most interesting feature of this tool is the graph analysis algorithm which allows the discovery of the existing control structures. This algorithm is used by the Model Specific components and is based on an analysis of the execution paths of the graph using matrices. In the current tool, it allows us to create a structured pivot model from a UML dynamic diagram. The algorithm works as follows:

A. Discover and store in a matrix all possible execution paths of the input graph.
   a. Start from the unique element with no incoming transition.
   b. Browse the outgoing transitions and add the connected elements to the path.
   c. Whenever there are two outgoing transitions, duplicate the current execution path and follow both alternatives.
   d. If an element already present in the current path is reached, ignore it: it is a loop.

B. Discover control structures from the path matrix.
   a. Browse the columns of the path matrix.
   b. Divide the matrix into three sub-matrices:
      i. The prefix sub-matrix: the columns whose elements are identical at the beginning of the table.
      ii. The variable sub-matrix: the columns whose elements are different.
      iii. The suffix sub-matrix: the columns whose elements are identical at the end of the table.
   c. If there is at least one prefix or suffix sub-matrix, the variable sub-matrix represents an alternative in the graph.
   d. Otherwise, distribute the matrix rows into subgroups until a prefix or suffix is discovered (or until each group reduces to a single row). Each sub-matrix then corresponds to an alternative.

C. Reapply the previous steps to the identified sub-matrices.

D. Discover loops.
   a. For each sub-matrix, if there is a transition starting from an element of a row and ending at a predecessor element of the same row, there is a loop between the two elements.
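The following minimal Java sketch illustrates steps A and B: path enumeration with back-edges ignored, followed by the common-prefix split that exposes an alternative (an illustration of the idea, not the Modelio implementation):

import java.util.*;

public class StructureDiscovery {

    // Step A: enumerate all execution paths of a directed graph,
    // ignoring back-edges (loops).
    static List<List<String>> paths(Map<String, List<String>> graph, String start) {
        List<List<String>> result = new ArrayList<>();
        walk(graph, start, new ArrayList<>(), result);
        return result;
    }

    private static void walk(Map<String, List<String>> g, String node,
                             List<String> current, List<List<String>> out) {
        current.add(node);
        boolean advanced = false;
        for (String n : g.getOrDefault(node, List.of())) {
            if (current.contains(n)) continue;          // back-edge: a loop (step A.d)
            advanced = true;
            walk(g, n, new ArrayList<>(current), out);  // duplicate the path (step A.c)
        }
        if (!advanced) out.add(current);                // terminal element: path complete
    }

    // Step B: length of the common prefix shared by all paths; the columns
    // after it (up to the common suffix) form the variable sub-matrix.
    static int commonPrefix(List<List<String>> m) {
        int len = 0;
        outer:
        while (true) {
            String ref = null;
            for (List<String> row : m) {
                if (len >= row.size()) break outer;
                if (ref == null) ref = row.get(len);
                else if (!ref.equals(row.get(len))) break outer;
            }
            len++;
        }
        return len;
    }

    public static void main(String[] args) {
        // Activity graph with one decision: start -> a -> (b | c) -> end
        Map<String, List<String>> g = Map.of(
                "start", List.of("a"),
                "a", List.of("b", "c"),
                "b", List.of("end"),
                "c", List.of("end"));
        List<List<String>> m = paths(g, "start");
        System.out.println(m);                // [[start, a, b, end], [start, a, c, end]]
        System.out.println(commonPrefix(m));  // 2: the variable middle is an alternative
    }
}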

5 Deployment Phase

5.1 Deployment Process

The purpose of the Deployment Phase is to apply methods and tools to manage the deployment of a migrated application to a Cloud Computing architecture. This process is based on the use of the PIM4Cloud profile, an extension of the OMG's SoaML which addresses the specificities of SaaS application modelling at the platform-independent level.

Modelling the application deployment platform is an independent process which does not depend directly on the work of the migration phase. However, the components identified during the implementation phases of the migration must be integrated into the deployment model. The deployment process (Figure 22) can start independently or else be initialised from the architecture model by a model transformation; in the latter case, the model is initialised with the deployment units provided by the Architecture Model. Deploying an application on the cloud also means managing a number of constraints, related especially to the distributed nature of the application architecture and to the need to manage the communication between its independent components.

Figure 22 : Deployment Phases in detail

The Deployment Phase starts with the modelling of the Deployment Model of the migrated application using the PIM4Cloud metamodel. This task includes the modelling of the software infrastructure required for deploying the application (operating system, framework and application components) and the modelling of the cloud resources used to deploy this infrastructure. Starting from this model, a configuration file in JSON format is then generated. At the same time, the source code of the migrated application components (generated from the implementation model) is packaged in a format suitable for deployment. The configuration file is then used by a specialised tool to carry out the application deployment: it manages the provisioning of cloud resources, the deployment of the application components on these resources and the configuration of the remote connections between these components.

5.2 Tools and Techniques

5.2.1 Deployment Modelling

The Deployment Model is an implementation of the PIM4Cloud profile, created with the Modelio PIM4Cloud module, and addresses three issues related to deploying an application on the cloud: modelling the technical architecture of the application; modelling the deployment of this technical architecture on public and private cloud computing platforms; and modelling the physical infrastructure used in conjunction with cloud computing platforms.

As explained above, the deployment model is produced in parallel with the architecture model. It is not a result of the recovery process and therefore cannot be created by model transformation from the previous models. To facilitate deployment modelling, we extended the catalogue provided by the Pattern Composition tool with a group of templates dedicated to the PIM4Cloud profile.

This model is structured in two separate packages addressing the Application domain and the Cloud Provider domain:

The Application Domain (Figure 23) allows the modelling of the software infrastructure required for deploying and managing a particular type of architecture component. This domain is used, for example, to model the application frameworks or operating systems which are deployed with the applications.

Figure 23 : Deployment Metamodel - Deployment Architecture

The Cloud Provider domain (Figure 24) allows us to model and configure the services provided by a cloud computing platform.

Figure 24 : Deployment Metamodel - Resource Providing

5.2.2 Cloud Resource Provisioning

The mechanism to provision cloud resources takes as input a CloudML deployment model, which is a Cloud Provider-Independent Model (CPIM), i.e., a model that describes the cloud concerns related to the deployment of the application in a cloud-agnostic way. This provisioning mechanism aims at provisioning on the cloud all the resources depicted in the model. In a CloudML deployment model, each cloud resource to be provisioned is identified by a node instance whose characteristics are specified in its node type. Typically, a node type represents a generic virtual machine (e.g., a virtual machine running GNU/Linux). This element can be parameterised by provisioning requirements (e.g., 2 cores <= compute <= 4 cores, 2 GiB <= memory <= 4 GiB, storage <= 10 GiB, location = Europe) as depicted in the SmallGNULinux node type defined in Listing 1.

"nodetypes" : [ {
    "id" : "SmallGNULinux",
    "os" : "GNULinux",
    "compute" : [2, 4],
    "memory" : [2048, 4096],
    "storage" : [10240],
    "location" : "eu",
    "sshkey" : "smallgnulinux",
    "securitygroup" : "dome",
    "groupname" : "smalllinux",
    "privatekey" : "YOUR KEY",
    "provides" : [ { "id" : "SSH" } ]
} ]

Listing 1 - An example of a node type from a CPIM in JSON format

The first step of the provisioning process consists of the specification of the provider on which the node instances will be deployed (e.g., the virtual machine running GNU/Linux called SmallGNULinux will be provisioned on Amazon EC2). A request is then sent to the provisioning and deployment engine for details on the virtual machines available from this provider that satisfy the constraints defining the node type (see Listing 1). The engine interacts with the provider in order to retrieve this information before updating the metadata associated with the instance (e.g., a t1.micro instance in the eu-west-1 location) as depicted in Figure 25. This process transforms our deployment model into a Cloud Provider-Specific Model (CPSM).

Figure 25 : Example of CPSM derivation

Once completed, the initial provisioning and deployment process can be triggered by interacting directly with the models@run-time environment. Models@run-time [MorinBaraisJFS09, BlairBencomoF09] is an architectural pattern for dynamic adaptive systems that proposes to leverage models during their execution. In particular, models@run-time provides an abstract representation of the underlying running system, which facilitates reasoning, simulation and enactment of adaptation actions. A change in the running system is automatically reflected in the model of the current system; similarly, any modification applied to this model can be enacted on the running system, on demand. Typically, a models@run-time engine follows this process: first, the current model of the system is provided by the models@run-time environment. The current model can then be consumed by a reasoning system that produces a target model. The target model may undergo a validation process before the adaptation is enacted. If validation passes, the current model and the target model of the system are compared; this makes it possible to identify the parts of the system that require adaptation. The adaptation is then enacted by the adaptation engine, and the target model becomes the current model.

Within the CloudML models@run-time architecture, one of the objectives of the adaptation mechanism is to enact the provisioning of cloud resources. For each node instance defined in the provider-specific deployment model, this mechanism provisions the corresponding virtual machine and starts it. The mechanism is built on top of the jclouds library, which supports more than 20 providers. Once the actual provisioning is achieved, the in-memory model of the running system is enriched with runtime metadata such as the public IP address, the private IP address, etc., as described in Figure 26.

Figure 26 : Example of CPSM runtime enrichment
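For illustration, resolving the constraints of the SmallGNULinux node type against a concrete provider through jclouds could look roughly as follows (a sketch with placeholder credentials and a simplified mapping of the constraints; the actual engine code differs):

import java.util.Set;
import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeService;
import org.jclouds.compute.ComputeServiceContext;
import org.jclouds.compute.RunNodesException;
import org.jclouds.compute.domain.NodeMetadata;
import org.jclouds.compute.domain.Template;

public class ProvisioningSketch {
    public static void main(String[] args) throws RunNodesException {
        // Bind to a concrete provider: this is the CPIM -> CPSM decision.
        ComputeServiceContext ctx = ContextBuilder.newBuilder("aws-ec2")
                .credentials("IDENTITY", "CREDENTIAL")    // placeholders
                .buildView(ComputeServiceContext.class);
        ComputeService compute = ctx.getComputeService();

        // Translate the node type's provisioning requirements into a template.
        Template template = compute.templateBuilder()
                .minCores(2)              // compute >= 2 cores
                .minRam(2048)             // memory >= 2048 MiB
                .locationId("eu-west-1")  // location = Europe
                .build();

        // Provision and start the virtual machine for the node instance.
        Set<? extends NodeMetadata> nodes =
                compute.createNodesInGroup("smalllinux", 1, template);
        // Runtime metadata with which the model is enriched.
        nodes.forEach(n -> System.out.println(n.getPublicAddresses()));
        ctx.close();
    }
}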

5.2.3 Automating the Deployment of Cloud Applications

The mechanism to deploy applications on the cloud also takes as input a CloudML deployment model (CPIM). This mechanism aims at installing, configuring and starting applications on cloud resources. In a CloudML deployment model, an application is identified by an artefact instance whose characteristics are specified in an artefact type. An artefact type represents a generic component of the system (e.g., a Derby database) to be deployed on a virtual machine. It can be associated with Resources (e.g., binary files, configuration files, shell scripts) as depicted in Listing 2.

{
    "name" : "derby",
    "resource" : {
        "name" : "derby",
        "retrievingcommand" : "wget ...",
        "deployingcommand" : "sudo install_derby.sh"
    },
    "provides" : [ {
        "name" : "derbyport",
        "isremote" : true,
        "portnumber" : "0"
    } ]
}

Listing 2 - An example of an artefact type from a CPIM in JSON format

A resource can be annotated with commands for each state of the deployment life-cycle of the associated artefact. This life-cycle is presented in Figure 27.

Figure 27 : Life-cycle of an application

As with the provisioning mechanism, the enactment of the deployment of an application is one of the objectives of the adaptation mechanism of the models@run-time engine (cf. Section 5.2.2). This deployment process follows the life-cycle of an application: it consists in triggering, on the cloud resource on which the application should be deployed, the commands described in the model to:

1. Retrieve the resources (e.g., download the Tomcat package)
2. Install the application (e.g., install the Tomcat war container)
3. Configure the application (e.g., configure Tomcat for the application to be deployed on it)
4. Start the application (e.g., start the Tomcat server)

An application may communicate with another application once deployed. These interactions are specified in the CloudML deployment models by bindings between artefact instances. Each binding can be associated with a resource whose commands describe how to configure the two applications to enable these interactions; these commands are triggered during the configuration step of the applications. An application may also require another software artefact in order to run. These dependencies are also specified in the CloudML deployment models, by mandatory bindings; as a consequence, before an application is deployed, all its dependencies are deployed first.
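For illustration, a binding between a web application artefact and the Derby artefact of Listing 2 could be expressed along the following lines (a purely hypothetical snippet in the style of Listing 2; the attribute names are invented and not taken from the CloudML specification):

"bindings" : [ {
    "name" : "webappToDerby",
    "client" : "webapp",
    "server" : "derby",
    "resource" : {
        "name" : "configureDerbyConnection",
        "configurationcommand" : "sudo configure_datasource.sh"
    }
} ]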

5.2.4 OLAP-OLTP Principles

While the modernization of a legacy application for the cloud may be achieved by transforming and adapting its structure to match a SOA/Cloud pattern, there are several post-migration considerations, such as component distribution and data persistence, that must be taken into account before the cloud properties (e.g. elasticity) of a migrated system can be exploited.

Modernization of OLTP and OLAP

Data persistence in legacy systems is generally built on top of two database models: one transactional (OLTP) and one analytical (OLAP). An OLTP (Online Transaction Processing) system is characterised by its support for a high load of short-lived transactions (e.g. INSERT, UPDATE) that are stored within a particular schema (i.e. a relational model). This kind of system also includes the mechanisms to maintain the consistency and integrity of the data when it is shared in a multi-access environment (the ACID properties). Generally, the performance of an OLTP database is measured by the number of transactions that can be handled in a given period of time (usually transactions per second).

In contrast, an OLAP (Online Analytical Processing) system focuses on the analysis of large volumes of historical (usually transactional) data. The aim of the analysis is to find trends, relations or other relevant information (applying data mining techniques) that can support the decision-making processes of a business. Unlike OLTP, the transaction load of an OLAP system is relatively low (query-based), but the structure of the queries is complex, as it involves joining multidimensional representations of the data. Consequently, answering an OLAP query tends to require a considerable amount of time and intensive CPU processing.

The migration of a database to the cloud is not a trivial task when the goal goes beyond upgrading its DBMS. A database can easily be implemented and configured on top of a virtualised OS that fits its operational requirements in software and hardware; in this context, migration implies only the replication of a system from one technology to another. The real migration issues arise when the database is expected to adapt its functionality to different workload conditions by using cloud properties (e.g. horizontal scaling). The workload of a database depends on its type (e.g. transactions per second for OLTP), so different aspects need to be considered for scaling its capacity. The workload of an OLTP database may be handled by 1) augmenting the underlying hardware capabilities in order to increase the number of transactions that can be processed, or 2) distributing the load among multiple database nodes. The workload of an OLAP database may be countered by decreasing the number of I/O reads from disk, which is achieved by augmenting computational resources such as the available memory.

Increasing the performance of an OLTP database for handling dynamic workloads

Even though the performance of a database needs to be modelled during the entire lifecycle of a system in order to be adapted efficiently to dynamic workload conditions [1], a migrated system may quantify the performance of its database by characterising the type of its OLTP transactions. This characterisation is performed by monitoring the use of the resources involved in the transactional process of the database (Figure 28 shows the key components to be analysed). Performance monitoring tools are discussed in detail in deliverable D6.6 [2], and a case study of OLTP migration (based on MediaWiki) is presented in D6.7 [3].

Figure 28 : Core components to be monitored for scaling an OLTP system in a cloud instance

The aim of monitoring the service demand of each resource is to identify possible bottlenecks, which can be replaced on the fly using dynamic cloud-allocation mechanisms. For example, storage volumes (i.e. hard drives) in Amazon EBS can be attached dynamically to an instance in order to deliver high performance for I/O-intensive workloads [4]. The key is to build a balanced system without bottlenecks.

Distribution of load for an OLTP database system in a distributed environment

A database system running in the cloud may scale out as the transactional workload grows or shrinks. Horizontal scaling consists in transforming a single-node setup into a multi-node configuration in which the load is distributed using a load balancer (LB). In a multi-node configuration (Figure 29 shows a basic scale-out configuration), each back-end node is a commodity server (highly decoupled) that can be added or removed on the fly in order to enhance the performance of the system and the availability and reliability of the data.

However, scaling an OLTP system horizontally may be unfeasible in some cases, as the logic of the application built on top of it may require specific access to the data. For instance, a web application that uses sessions cannot distribute the transactions uniformly among the back-end database nodes, because a session is created and maintained on a specific server (the first server that handles the transaction) and additional mechanisms are needed to transfer the session among the back-end nodes.

Figure 29 : A basic scale-out configuration of an OLTP system with uniform distribution of transactions

To counter the issues of storing client state in a distributed cluster, several solutions can be considered, such as:

Using cookies instead of sessions

Sharing the sessions in a shared folder within the cluster

Adding a centralised mem-cache or an extra database to store the sessions

Implementing stickiness at the LB level

However, in a cloud environment there are certain issues with adding or removing servers from an existing pool; for example, a server should not be removed from the pool while it still holds sessions of active users. Thus, approaches that allow tracking sessions with minimal modifications to the infrastructure, such as sticky sessions, are encouraged. Stickiness is a technique in which the client state is created and maintained at the LB level (using cookies), so that when a transaction from a specific user arrives at the LB, the LB knows to which back-end node it must be routed (Figure 30 shows a basic scale-out configuration of an OLTP system with sticky sessions). The main problem with sticky sessions is that the scaling properties of the cloud, achieved by distributing the load equally among the available back-end nodes, gradually disappear: session durations are not uniform, so a fair distribution of the load among the servers may not be possible.

Figure 30 : A basic scale-out configuration of an OLTP system with sticky sessions

Augmenting the performance of an OLAP database for handling complex queries

Since the purpose of an OLAP database is oriented towards the analysis of data, the workload of an OLAP system depends on the use of computational resources such as CPU, memory and disks. Consequently, vertical scaling may be used to augment its performance for solving complex queries. Vertical scaling consists of replacing the low computational capabilities of a server with higher ones at runtime, as shown in Figure 31.

Figure 31 : Augmenting computational capabilities to improve the performance of an OLAP system when facing intensive data processing

Enabling the horizontal scaling of migrated OLTP applications

To enable migrated Online Transaction Processing (OLTP) applications to use the advantages of the cloud (such as elasticity and on-demand resource provisioning in real time), they must be able to scale horizontally. For a typical OLTP application, this involves separating the user interface, the application logic and the datastore, and deploying one or more of these in a distributed and scalable manner. As a result of remodelling the legacy application through the REMICS toolchain, we already have a model of all the software components of the resulting application. To enable horizontal scaling, it is necessary to remodel the original deployment model of the legacy application, configure the separated components to interact remotely, and introduce automatic scaling into the deployed system.

To illustrate this process, we apply it to one of the use cases of the project: Dome. Figure 32 illustrates the basic deployment model of this use case, which was not deployed in a scalable fashion before the migration process.

Figure 32 : Illustration of the original deployment model of the migrated Dome pilot case

The first step is to separate the database and application logic components and configure them to be deployed on two separate cloud instances, as shown in Figure 33.

Figure 33 : Software components separated between two cloud instances

Once the components are separated, it must be decided which of the nodes are to be scaled in the cloud. In the Dome case, only the application logic node (Dome Behaviour) was chosen for scaling, and the database (Dome Data) was kept as a single node. To deal with multiple application servers, a load balancer is introduced, which divides user requests between them. The choice of the load balancer and how to use it are described in more detail in deliverable D6.7 - Performance testing of cloud applications. Figure 34 illustrates the deployment model after replicating the application servers and introducing a load balancer node.

Figure 34 : Modified deployment with the replicated Dome Behaviour node and added load balancer

The next step is to introduce an automatic scaler component. While it is possible to use existing cloud services for this (like Amazon Auto Scaling, as described in deliverable D6.7 - Performance testing of cloud applications), doing so would mean vendor lock-in to the specific service provider. To be able to reuse the automated deployment functionality provided by the CloudML tool (cf. Section 5.2.3) and to integrate it better with the whole REMICS tool chain, a new software component was created which can be automatically introduced into an existing CloudML deployment model, as long as the model already includes a load balancer, as shown in Figure 35, and a node instance (in this case Dome Behaviour) that can be added or removed on the fly. Removing or adding a node instance is supported by CloudML, as it provides a means of modifying an already running deployment by changing the original model and redeploying it, as well as a means of reconfiguring the existing software components upon such an event.

Figure 35 : Performance monitoring and autoscaling logic introduced to the deployment

Figure 35 illustrates the final model, where the standalone autoscaler has been introduced. It also includes software components for measuring the performance of the running deployment (CollectD) for the autoscaler, RRDtool for generating graphs, and the Apache web server for displaying them through a web interface. The performance measurement collector and the autoscaling components are located on a separate cloud instance, to avoid affecting the performance of the running applications and to scale better with a higher number of cloud instances. The performance monitoring results are used by the autoscaler, but they are also published through the web interface on the collection node and can be downloaded from the collector as a diagram or as raw data.

The autoscaler component takes as input an XML configuration file which specifies the following information about the deployment execution:

For each node type:
o name - The name of the node type in the CloudML model
o minservers - The minimum number of instances for this node type
o maxservers - The maximum number of instances for this node type
o amounttoscaledown - How many instances to scale down at once
o amounttoscaleup - How many instances to scale up at once
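Based on the fields listed above, such a configuration file could look as follows (a hypothetical sketch: the element names come from the list above, while the surrounding structure is assumed):

<autoscaler>
  <nodetype>
    <name>DomeBehaviour</name>            <!-- node type in the CloudML model -->
    <minservers>1</minservers>            <!-- never scale below one instance -->
    <maxservers>5</maxservers>            <!-- upper bound on provisioned instances -->
    <amounttoscaledown>1</amounttoscaledown>
    <amounttoscaleup>2</amounttoscaleup>
  </nodetype>
</autoscaler>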


More information

Achieving Right Automation Balance in Agile Projects

Achieving Right Automation Balance in Agile Projects Achieving Right Automation Balance in Agile Projects Vijayagopal Narayanan Vijayagopal.n@cognizant.com Abstract When is testing complete and How much testing is sufficient is a fundamental questions that

More information

Introduction to the RAMI 4.0 Toolbox

Introduction to the RAMI 4.0 Toolbox Introduction to the RAMI 4.0 Toolbox Author: Christoph Binder Version: 0.1 Date: 2017-06-08 Josef Ressel Center for User-Centric Smart Grid Privacy, Security and Control Salzburg University of Applied

More information

IBM Rational Developer for System z Version 7.5

IBM Rational Developer for System z Version 7.5 Providing System z developers with tools for building traditional and composite applications in an SOA and Web 2.0 environment IBM Rational Developer for System z Version 7.5 Highlights Helps developers

More information

XBS Application Development Platform

XBS Application Development Platform Introduction to XBS Application Development Platform By: Liu, Xiao Kang (Ken) Xiaokang Liu Page 1/10 Oct 2011 Overview The XBS is an application development platform. It provides both application development

More information

Linking ITSM and SOA a synergetic fusion

Linking ITSM and SOA a synergetic fusion Linking ITSM and SOA a synergetic fusion Dimitris Dranidis dranidis@city.academic.gr CITY College, Computer Science Department South East European Research Centre (SEERC) CITY College CITY College Founded

More information

DOMAIN ENGINEERING OF COMPONENTS

DOMAIN ENGINEERING OF COMPONENTS 4-02-55 INFORMATION MANAGEMENT: STRATEGY, SYSTEMS, AND TECHNOLOGIES DOMAIN ENGINEERING OF COMPONENTS Carma McClure INSIDE Definition of Components; Component-Based Development; Reuse Processes; Domain

More information

Product Range 3SL. Cradle -7

Product Range 3SL. Cradle -7 Cradle -7 From concept to creation... 3SL Product Range PRODUCT RANGE HIGHLIGHTS APPLIES TO AGILE AND PHASE PROJECTS APPLICATION LIFECYCLE MANAGEMENT REQUIREMENTS MANAGEMENT MODELLING / MBSE / SYSML /

More information

EUROPEAN ICT PROFESSIONAL ROLE PROFILES VERSION 2 CWA 16458:2018 LOGFILE

EUROPEAN ICT PROFESSIONAL ROLE PROFILES VERSION 2 CWA 16458:2018 LOGFILE EUROPEAN ICT PROFESSIONAL ROLE PROFILES VERSION 2 CWA 16458:2018 LOGFILE Overview all ICT Profile changes in title, summary, mission and from version 1 to version 2 Versions Version 1 Version 2 Role Profile

More information

CS 575: Software Design

CS 575: Software Design CS 575: Software Design Introduction 1 Software Design A software design is a precise description of a system, using a variety of different perspectives Structural Behavioral Packaging Requirements, Test/Validation

More information

Module 7 TOGAF Content Metamodel

Module 7 TOGAF Content Metamodel Module 7 TOGAF Content Metamodel V9 Edition Copyright January 2009 All Slide rights reserved 1 of 45 Published by The Open Group, January 2009 TOGAF Content Metamodel TOGAF is a trademark of The Open Group

More information

IBM API Connect: Introduction to APIs, Microservices and IBM API Connect

IBM API Connect: Introduction to APIs, Microservices and IBM API Connect IBM API Connect: Introduction to APIs, Microservices and IBM API Connect Steve Lokam, Sr. Principal at OpenLogix @openlogix @stevelokam slokam@open-logix.com (248) 869-0083 What do these companies have

More information

Chapter 4. Fundamental Concepts and Models

Chapter 4. Fundamental Concepts and Models Chapter 4. Fundamental Concepts and Models 4.1 Roles and Boundaries 4.2 Cloud Characteristics 4.3 Cloud Delivery Models 4.4 Cloud Deployment Models The upcoming sections cover introductory topic areas

More information

ACCENTURE & RED HAT ACCENTURE CLOUD INNOVATION CENTER

ACCENTURE & RED HAT ACCENTURE CLOUD INNOVATION CENTER ACCENTURE & RED HAT ACCENTURE CLOUD INNOVATION CENTER HYBRID CLOUD MANAGEMENT & OPTIMIZATION DEVOPS FOR INFRASTRUCTURE SERVICES ACCENTURE CLOUD INNOVATION CENTER PUSHING CUSTOM CLOUD SOLUTIONS TO THE MAX.

More information

D43.2 Service Delivery Infrastructure specifications and architecture M21

D43.2 Service Delivery Infrastructure specifications and architecture M21 Deliverable D43.2 Service Delivery Infrastructure specifications and architecture M21 D43.2 Service Delivery Infrastructure specifications and architecture M21 Document Owner: Contributors: Dissemination:

More information

UML, SysML and MARTE in Use, a High Level Methodology for Real-time and Embedded Systems

UML, SysML and MARTE in Use, a High Level Methodology for Real-time and Embedded Systems UML, SysML and MARTE in Use, a High Level Methodology for Real-time and Embedded Systems Alessandra Bagnato *, Imran Quadri and Andrey Sadovykh * TXT e-solutions (Italy) Softeam (France) Presentation Outline

More information

Migrating a Business-Critical Application to Windows Azure

Migrating a Business-Critical Application to Windows Azure Situation Microsoft IT wanted to replace TS Licensing Manager, an application responsible for critical business processes. TS Licensing Manager was hosted entirely in Microsoft corporate data centers,

More information

Chapter 6 Architectural Design. Lecture 1. Chapter 6 Architectural design

Chapter 6 Architectural Design. Lecture 1. Chapter 6 Architectural design Chapter 6 Architectural Design Lecture 1 1 Topics covered ² Architectural design decisions ² Architectural views ² Architectural patterns ² Application architectures 2 Software architecture ² The design

More information

INTRODUCING A MULTIVIEW SOFTWARE ARCHITECTURE PROCESS BY EXAMPLE Ahmad K heir 1, Hala Naja 1 and Mourad Oussalah 2

INTRODUCING A MULTIVIEW SOFTWARE ARCHITECTURE PROCESS BY EXAMPLE Ahmad K heir 1, Hala Naja 1 and Mourad Oussalah 2 INTRODUCING A MULTIVIEW SOFTWARE ARCHITECTURE PROCESS BY EXAMPLE Ahmad K heir 1, Hala Naja 1 and Mourad Oussalah 2 1 Faculty of Sciences, Lebanese University 2 LINA Laboratory, University of Nantes ABSTRACT:

More information

Getting Hybrid IT Right. A Softchoice Guide to Hybrid Cloud Adoption

Getting Hybrid IT Right. A Softchoice Guide to Hybrid Cloud Adoption Getting Hybrid IT Right A Softchoice Guide to Hybrid Cloud Adoption Your Path to an Effective Hybrid Cloud The hybrid cloud is on the radar for business and IT leaders everywhere. IDC estimates 1 that

More information

Introduction. Delivering Management as Agile as the Cloud: Enabling New Architectures with CA Technologies Virtual Network Assurance Solution

Introduction. Delivering Management as Agile as the Cloud: Enabling New Architectures with CA Technologies Virtual Network Assurance Solution Delivering Management as Agile as the Cloud: Enabling New Architectures with CA Technologies Virtual Network Assurance Solution Introduction Service providers and IT departments of every type are seeking

More information

Enterprise Architect. User Guide Series. Domain Models

Enterprise Architect. User Guide Series. Domain Models Enterprise Architect User Guide Series Domain Models What support for modeling domains? Sparx Systems Enterprise Architect supports a range of modeling languages, technologies and methods that can be used

More information

Web Services Annotation and Reasoning

Web Services Annotation and Reasoning Web Services Annotation and Reasoning, W3C Workshop on Frameworks for Semantics in Web Services Web Services Annotation and Reasoning Peter Graubmann, Evelyn Pfeuffer, Mikhail Roshchin Siemens AG, Corporate

More information

Chapter 6 Architectural Design

Chapter 6 Architectural Design Chapter 6 Architectural Design Chapter 6 Architectural Design Slide 1 Topics covered The WHAT and WHY of architectural design Architectural design decisions Architectural views/perspectives Architectural

More information

ArchiMate 2.0. Structural Concepts Behavioral Concepts Informational Concepts. Business. Application. Technology

ArchiMate 2.0. Structural Concepts Behavioral Concepts Informational Concepts. Business. Application. Technology ArchiMate Core Structural Concepts Behavioral Concepts Informational Concepts interaction Technology Application Layer Concept Description Notation Concept Description Notation Actor An organizational

More information