DEPARTMENT OF INFORMATICS. A Model-Driven JSON Editor


DEPARTMENT OF INFORMATICS
TECHNISCHE UNIVERSITÄT MÜNCHEN

Master's Thesis in Informatics

A Model-Driven JSON Editor

Lucas Daniel Köhler


DEPARTMENT OF INFORMATICS
TECHNISCHE UNIVERSITÄT MÜNCHEN

Master's Thesis in Informatics

A Model-Driven JSON Editor
Ein Modell-basierter JSON Editor

Author: Lucas Daniel Köhler
Supervisor: Prof. Dr. Florian Matthes
Advisors: Adrian Hernandez-Mendez, Dr. Jonas Helming
Submission Date:


I confirm that this master's thesis is my own work and I have documented all sources and material used.

Munich, Lucas Daniel Köhler


Acknowledgments

I want to thank my advisors Adrian Hernandez-Mendez and Dr. Jonas Helming for their ongoing support and advice during the whole process of writing my master's thesis. I also want to thank Prof. Dr. Florian Matthes for the opportunity to write this thesis at his chair Software Engineering for Business Information Systems (SEBIS). I want to thank Eugen Neufeld for his guidance during the implementation phase of this thesis. Lastly, I want to thank my family and all my friends who always supported me.


Abstract

Many engineering domains require the input and modification of structured data (also known as models). This structured data is usually defined by a meta-model (e.g. in JSON Schema). To modify instances of structured data, users require proper tooling. The manual implementation of suitable tooling is costly, leads to re-implementing common features, and causes additional development effort whenever the meta-model changes. In this thesis, we develop a model-driven framework to semi-automatically create an editor for a given meta-model. This editor allows the creation and modification of structured data as specified by the meta-model. The goal is to minimize the manual work required to create and maintain such an editor. As preparation, we conduct an extensive analysis of the requirements of structured data tools. First, we conduct a literature review on multilevel modeling tool requirements. Second, we complement the results with requirements gathered by a tool analysis of nine data editors. Based on this, we design and implement the model-driven editor framework. Thereby, we focus on enabling the creation of an editor with minimal effort while simultaneously providing extensive options to customize and extend the generated editors. We evaluate our framework by configuring and creating three editors for the existing meta-models Ecore, JSON Schema, and UI Schema. Finally, we successfully evaluate the usability of editors created by our framework by conducting a System Usability Scale test for the UI Schema Editor. It achieves a score of 79.5 out of 100 points.


Zusammenfassung

Viele Bereiche der Ingenieurswissenschaften erfordern die Eingabe und Modifikation von strukturierten Daten (auch bekannt als Modelle). Strukturierte Daten sind meistens durch ein Meta-Modell definiert (z.B. ein JSON Schema). Um Instanzen dieser strukturierten Daten zu bearbeiten, benötigen Anwender geeignete Werkzeuge. Die manuelle Implementierung dieser Werkzeuge verursacht hohe Kosten, führt zur Wiederentwicklung gemeinsamer Funktionalität und verursacht zusätzlichen Entwicklungsaufwand bei jeder Änderung des Meta-Modells. In dieser Arbeit entwickeln wir ein modell-basiertes Framework zur halbautomatischen Erzeugung eines Editors für ein gegebenes Meta-Modell. Dieser Editor ermöglicht die Erzeugung und Bearbeitung von strukturierten Daten, die durch das Meta-Modell spezifiziert sind. Das Ziel ist die Minimierung des manuellen Aufwandes, der zur Erzeugung und Wartung eines solchen Editors benötigt wird. Als Vorbereitung führen wir eine umfangreiche Anforderungsanalyse von Programmen für strukturierte Daten durch. Zuerst führen wir eine Literaturrecherche zu Anforderungen von Multilevel-Modellierungsprogrammen durch. Wir ergänzen die Ergebnisse mit Anforderungen, die wir durch eine Analyse von neun Dateneditoren ermitteln. Auf dieser Basis entwerfen und implementieren wir das modell-basierte Editor-Framework. Hierbei achten wir besonders darauf, dass Editoren mit minimalem Aufwand erzeugt werden können und trotzdem umfangreiche Möglichkeiten zur Anpassung und Erweiterung bieten. Wir evaluieren unser Framework, indem wir drei Editoren für die existierenden Meta-Modelle Ecore, JSON Schema und UI Schema erzeugen und konfigurieren. Abschließend evaluieren wir erfolgreich die Benutzerfreundlichkeit von Editoren, die mit unserem Framework erzeugt wurden, indem wir einen System Usability Scale Test mit dem UI Schema Editor durchführen. Er erzielt ein Ergebnis von 79,5 von 100 Punkten.


Contents

Acknowledgments
Abstract
Zusammenfassung
1. Introduction
2. State of the Art
   2.1. Editor Generation Framework Approach
   2.2. Literature Review on Requirements of Multilevel Modeling Tools
        Introduction; Scope and Concepts; Search Process; Literature Analysis and Synthesis
   2.3. Tool Analysis
        Analyzed Editors; Tool Requirements
3. Requirements
   Priorities; Functional Requirements (Must Have, Desirable, Nice To Have); Implementation Constraints; Quality Attributes
4. Implementation
   Architecture (Editor, Renderer Services, JsonForms); Design; Rendering Process (Tree Renderer, Drag and Drop); 4.6. Detail Rendering; Parser (Retrieve Containment Properties, Self Contain a Schema, Reference Resolving in Schemata); Validation; References (Links, Resource Set, ID-based References, Path-based References); Customization (Image Mapping, Label Mapping, Model Mapping, UI Schemata, Resources, Configuration Object); Testing
5. Evaluation
   Evaluation Languages (Ecore, JSON Schema, UI Schema Editor); Customization Process (General, Ecore, UI Schema, JSON Schema); Advantages and Limitations (Advantages, Limitations); Usability Test (Evaluation Scenario, Results, Comparison to a Specific Editor)
6. Conclusion and Future Work
List of Figures
List of Listings
List of Tables
Bibliography
Appendix A. Literature Review
   A.1. Search Queries
        A.1.1. Deep Meta-Modelling; A.1.2. Domain-Specific Modelling Language; A.1.3. Language Workbench; A.1.4. Multi-Level Modelling
   A.2. Excluded Areas
        A.2.1. Deep Meta-Modelling; A.2.2. Domain-Specific Modelling; A.2.3. Language Workbench; A.2.4. Multi-Level Modelling
Appendix B. Evaluation of the UI Schema Editor
   B.1. Evaluation Schema
   B.2. System Usability Scale Items
   B.3. UI Schema Editor Missing Features
   B.4. Comparison
        B.4.1. Which Editor Did You Prefer and Why?; B.4.2. What Are the Advantages of the Json Forms Editor?; B.4.3. What Are the Advantages of the UI Schema Editor?
Appendix C. Repositories


1. Introduction

Many engineering domains require the input, creation, modification, and export of structured data. Prevalent examples in the software engineering domain are UML and the Package JSON for Node modules 1. Another example from the engineering domain is the automotive standard AUTOSAR 2. Structured data is usually defined by a meta-model. Well-known examples of such meta-models are JSON Schema 3, the several schema languages for defining XML schemata 4, and the Ecore language of the Eclipse Modeling Framework 5.

Creating and modifying structured data by hand is error-prone and cumbersome, and lacks validation of the defined data. For instance, when defining a reference between two elements, the user cannot know whether it specifies a valid target. Also, inspecting data with cascaded hierarchies without proper tooling gets confusing rather quickly. As a consequence, specific tooling for modifying structured data is needed. We observe three main types of editors, depending on the use case: text-based editors (e.g. Xtext 6), graphical editors (e.g. Enterprise Architect 7 for UML), and form-based editors (e.g. the Ecore tooling in EMF 8). In this thesis, we focus on form-based editors.

The manual implementation of such an editor for every meta-model has several drawbacks. (1) Developing a professional editor has high development costs. (2) Changes to the meta-model cause additional costs because the editor has to be adapted manually. (3) Having multiple editors leads to code duplication for common functionality. For instance, functionality such as import and export, data validation, type-sensitive editing, or data binding between view and data is needed in most editors. Consequently, there are frameworks to generate editors for a given meta-model. These are based on desktop technologies, e.g. EMF and EMF Forms 9. However, to the best of our knowledge, no comparable framework exists for a form-based editor based on web technologies.
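To make the reference problem above concrete, here is a minimal sketch, with invented names not tied to any particular framework: a reference is only valid if its target id exists somewhere in the model, which is exactly the kind of check that is impractical to do by hand.

```typescript
// Hypothetical illustration: detecting dangling references in a model.
// `Element` and `danglingReferences` are invented names for this sketch.
interface Element {
  id: string;
  ref?: string; // id of another element this one points to
}

// Returns the ids of all elements whose reference does not resolve.
function danglingReferences(elements: Element[]): string[] {
  const ids = new Set(elements.map(e => e.id));
  return elements
    .filter(e => e.ref !== undefined && !ids.has(e.ref))
    .map(e => e.id);
}

const model: Element[] = [
  { id: "a", ref: "b" }, // "b" exists: valid target
  { id: "b", ref: "c" }, // "c" does not exist: invalid target
];
```

An editor with proper tooling would surface the invalid target of "b" immediately, instead of leaving it to the user to scan the whole document.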
Therefore, it is desirable to have such an editor generation framework for use in web applications. Today, a multitude of devices and operating systems is used to access data: for instance, personal computers running Windows, Mac OS,

or Linux. Phones and tablets, mostly running iOS or Android, are widespread, too. Implementing software as a web application allows it to potentially run on all these devices, as it simply runs in a web browser. Compared to developing multiple native applications, this saves time and development costs. Especially in the context of mobile applications, this avoids the need for various native technologies [14, 57] and offers fast development, simple maintenance, and full application portability [57]. Thereby, mobile web apps have no major drawbacks compared to native apps as long as no native hardware (e.g. GPS) of the device is used [40]. Another advantage of implementing such an editor framework in web technologies is the increasing adoption of web-based IDEs like Eclipse CHE 10 and Theia 11, or the Electron-based Atom 12. Using compatible technologies, the editor framework can then be integrated into these IDEs for software engineering use cases.

As we can see, there is a need for the generation of web-based editors for structured data. Therefore, we propose the development of a model-driven editor framework which allows editing structured data based on a given meta-model. The framework must be applicable to very different meta-models in order to allow creating editors for the various use cases involving structured data. Therefore, the framework should provide extensive configuration and extension possibilities. At the same time, the configuration effort to get the editor running for a given meta-model should be as low as possible. Due to JSON's popularity [7, 66] and efficiency [52, 65], we base our framework on JSON Schema 13, which allows specifying JSON data.

To address these challenges and ensure that we develop a well-founded and relevant framework, we need a scientifically proven design process. Therefore, we base our approach on the three cycles of design science research [36].
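To illustrate how a JSON Schema constrains JSON data, here is a deliberately tiny, hand-rolled check for a small subset of the specification (`required` and per-property `type` on objects). A real editor would use a complete validator; `MiniSchema` and `conforms` are our own invented names, not part of any framework discussed here.

```typescript
// Illustrative subset of JSON Schema validation (invented names).
interface MiniSchema {
  type: "object";
  required?: string[];
  properties?: { [key: string]: { type: string } };
}

// Checks whether `data` conforms to the tiny schema subset above.
function conforms(schema: MiniSchema, data: any): boolean {
  if (typeof data !== "object" || data === null || Array.isArray(data)) {
    return false; // only plain objects are allowed here
  }
  for (const key of schema.required ?? []) {
    if (!(key in data)) return false; // missing required property
  }
  for (const [key, prop] of Object.entries(schema.properties ?? {})) {
    if (key in data && typeof data[key] !== prop.type) return false;
  }
  return true;
}

const personSchema: MiniSchema = {
  type: "object",
  required: ["name"],
  properties: { name: { type: "string" }, age: { type: "number" } },
};
```

With `personSchema` as the meta-model, `{ name: "Ada", age: 36 }` is a legal instance, whereas an object without a `name` is not.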
Figure 1.1 shows the adaptation of the approach to this thesis. Thereby, the information exchange between Environment and Design Science Research is the Relevance Cycle. Correspondingly, the Rigor Cycle connects the Design Science Research and the Knowledge Base. The Design Cycle is the feedback and refinement loop inside the Design Science Research. Before we can design and build the editor framework in the Design Cycle, we must determine what exactly we want to build. From this follows the first research question:

RQ 1: What are the requirements of the model-driven JSON editor?

We analyze the editor's requirements from two angles: Rigor and Relevance. To investigate the existing scientific knowledge base in adherence to the Rigor Cycle, we conduct an extensive literature review in section 2.2. This provides us with a wide array of concepts and requirements determined in past research. As a first part of the Relevance Cycle, we analyze nine tools for the creation of structured data in section 2.3. This provides us with concrete requirements from the application domain. Together, the

Figure 1.1.: The Design Science Approach of this Thesis. Source: Own diagram created after [36].

two analyses give us a broad base of potential requirements for the editor framework. Based on these results, we answer the second research question:

RQ 2: What are the editor's architecture and design?

This includes selecting the relevant requirements, defining the architecture and design, and implementing the framework. As a second step in the Relevance Cycle, we determine and prioritize our editor framework's requirements by conducting an interview with industry experts in the application domain of structured data. The results are described in chapter 3. Based on these, we define the framework's architecture, design, and implementation details as the first part of the Design Cycle in chapter 4. As a second part of the Design Cycle, we need to evaluate our implementation. This leads us to the final research question:

RQ 3: What are the limitations of the editor to generate modern web forms?

Therefore, we evaluate our implemented editor framework in two ways. First, we instantiate editors for meta-models defining Ecore, JSON Schema, and UI Schema. Second, we conduct a usability test of the created UI Schema editor. Both are described in chapter 5. As a final step of the Rigor Cycle, the created artifacts and gained experience are contributed back to the scientific knowledge base in the form of this thesis.
Furthermore, the implemented software is contributed back to the environment as open-source software. Finally, a conclusion of the achieved results, as well as starting points for further research, is presented in chapter 6.


2. State of the Art

In this chapter, we analyze the current state of the art in modeling tools. Therefore, we first give a short introduction to the approach of an editor generation framework in section 2.1. Subsequently, we gather a comprehensive collection of requirements. This collection will be the foundation for the prioritization of the model-driven JSON editor's requirements. Therefore, we analyze the existing requirements from two perspectives: academic and industry-focused. To get a well-founded overview of past and current views in academia, we conduct an extensive literature review in section 2.2. As requirements and concepts in literature are often described on a more abstract level, we also analyze the implemented requirements of nine model editors in section 2.3. By relating the practical requirements to the ones found in the literature review, we obtain more concrete implementation requirements for part of the concepts determined in the literature review.

2.1. Editor Generation Framework Approach

In this section, we introduce the concept of a model-driven editor generation framework. Such a framework allows the semi-automatic generation of an editor for a given data model; additionally, a view model defines the editor's user interface. This approach is illustrated in Figure 2.1. The data model defines the legal data which can be created with the generated editor. Thereby, the data model defines the data's structure as well as the properties that the created data objects may contain. The view model defines how data objects of a corresponding data model are rendered. For instance, this can include editable properties or the definition of labels or icons for objects. Thereby, the view model references the elements of the data model which it configures.
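The data-model/view-model split can be sketched in a few lines; all type and function names below are invented for illustration and are not the framework's actual API. The data model lists the legal properties, and the view model picks and orders the ones the generated editor should show.

```typescript
// Invented, minimal shapes for the two inputs of the generation approach.
interface DataModel {
  properties: { [name: string]: { type: string } };
}
interface ViewModel {
  controls: string[]; // property names of the data model, in display order
}

// "Generates" the editor: one control per referenced, existing property.
// Controls referencing properties the data model does not define are
// dropped, since a view model is only meaningful for its own data model.
function generateControls(data: DataModel, view: ViewModel): string[] {
  return view.controls
    .filter(name => name in data.properties)
    .map(name => `${name}: ${data.properties[name].type}`);
}

const task: DataModel = {
  properties: { title: { type: "string" }, done: { type: "boolean" } },
};
const taskView: ViewModel = { controls: ["title", "done"] };
```

Calling `generateControls(task, taskView)` yields one control description per property, in the order the view model dictates.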
As a consequence, the view model is only valid for its corresponding data model.

2.2. Literature Review on Requirements of Multilevel Modeling Tools

In this section, we describe the process and the results of our literature review on the requirements of multilevel modeling tools.

Figure 2.1.: The Editor Generation Approach

2.2.1. Introduction

To implement a model-driven editor framework that allows the semi-automatic creation of an editor, we need to determine the requirements of a model-driven editor. In this context, model-driven means that the concepts instantiable in an editor created with the framework are defined by its underlying model. Hence, this model is the meta-model defining the modeling language supported by the editor [3]. As a model generally abstracts the concepts of a domain [30, p. 4] and a domain-specific modeling language (DSML) is described by a meta-model [45, p. 1], the editor allows the creation of instances of a DSML. Furthermore, the editor could be used to define new DSMLs by choosing a meta-meta-model whose concepts are instantiable to a new meta-model [3]. This suggests searching for requirements of a DSML tool.

As pointed out in [45, 48], one typical meta-modeling standard is the Meta-Object Facility (MOF) 14. The Object Management Group suggests it as the meta-modeling technique for Model-Driven Architecture [58]. Furthermore, it is implemented by the well-known Eclipse Modeling Framework 15. These two approaches are limited to two neighboring meta-modeling levels, where the higher level is the meta-model and the lower one the instance [45, 48]. This results in the problem that language designers often have to express multi-layer concepts in one meta-model [48, p. 2], e.g. by defining type-instance relations in the meta-model [45, p. 2]. Furthermore, there is a fundamental design conflict when defining DSMLs in one meta-layer. On the one hand, the more specifically a DSML is tailored to a domain, the better the support for suitable use cases. On the other hand, less specific DSMLs provide better reusability. This makes it hard to determine the appropriate level of specificity for a DSML [28, pp. 2-3].
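The meta-level chains discussed above (instance, model, meta-model, meta-meta-model) can be made concrete with a toy data structure. `Clabject` and `levelDepth` are invented names for this sketch only; the point is that nothing in the structure itself limits the number of levels, unlike the fixed two-level setting.

```typescript
// Toy structure: each element may have a `type` one level above it, and
// every element can in turn serve as the type of elements below it.
interface Clabject {
  name: string;
  type?: Clabject; // element on the next higher meta-level
}

// Number of meta-levels above an element.
function levelDepth(e: Clabject): number {
  return e.type === undefined ? 0 : 1 + levelDepth(e.type);
}

const metaMetaModel: Clabject = { name: "Class" };
const metaModel: Clabject = { name: "ProductType", type: metaMetaModel };
const model: Clabject = { name: "Book", type: metaModel };
const instance: Clabject = { name: "myBook", type: model };
```

Here `instance` sits three levels below `metaMetaModel`, and any element in the chain can act as the meta-model of the level beneath it.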
The aforementioned problems can be solved by allowing an arbitrary number of meta-modeling levels. Thereby, every model is automatically a meta-model for the next lower level [28, 45, 48]. This approach is called multilevel modeling [28]. Further advantages of multilevel modeling include lower model complexity, easier-to-use DSMLs, improved

integration [28], improved separation of concerns, and simplified administration of standards [38].

This leads us to set the topic of this literature review to the analysis of requirements of multilevel modeling tools. Our contribution is an overview of the requirements and features of multilevel modeling tools. To obtain comprehensible and reproducible results, our research process follows the processes defined by Webster and Watson [88] and vom Brocke et al. [11]. An overview of our resulting approach is shown in Figure 2.2. First, we determine the search concepts and the scope of the search in subsection 2.2.2. Subsequently, we use this as the starting point for our iteratively conducted, database-driven search for relevant references in subsection 2.2.3. Afterwards, we analyze the resulting literature to extract 104 requirements and aggregate them into 16 categories in subsection 2.2.4.

2.2.2. Scope and Concepts

In this section, we describe the scope of our research, explain which concepts are included in the search, and elaborate why these concepts are relevant in our scope. The scope of our review is the investigation of requirements for multilevel modeling tools. Because not all relevant literature explicitly talks about multilevel modeling tools, we need to define synonyms and related terms. This is divided into two parts. First, we determine the concepts related to multilevel modeling. Second, we establish synonym terms for tool in our research context.

Research of Related Concepts

From our research experience, we know that Frank [28] provides a relevant definition of multilevel modeling and describes requirements for multilevel modeling tools. Furthermore, we investigate Fowler's essential article [26] providing the basis for today's understanding of language workbenches.
The concept of language workbenches is relevant to our research because both multilevel modeling tools and language workbenches are about efficiently defining and using domain-specific languages [26, 28]. Because Frank [28] introduces multilevel modeling as an improved technique for the development and usage of DSMLs, we also consider this concept in our initial research to get an overview of relevant concepts. The iterative usage of Google, Google Scholar, and the analysis of backward references in researched literature led to the relevant search concepts described in the following section.

Relevant Search Concepts

Multilevel modeling. Multilevel modeling is an approach for the specification of DSMLs which allows any number of modeling levels; every model can be used as a

Figure 2.2.: Overview of the Literature Review's Research Process

meta-model for the next lower level [28, 45].

Deep meta-modeling. Deep meta-modeling names the same concepts as the aforementioned multilevel modeling [45, 48] and is even declared a synonym [38, 69]. Consequently, we need to search for literature about deep meta-modeling so as not to miss literature that does not also label itself as multilevel modeling.

Language workbench. The term language workbench was coined by Martin Fowler [26] in 2005. A language workbench allows users to define new domain-specific languages (DSLs) and integrate them with each other. Thereby, a DSL is defined as a trio of editor(s), generator(s), and a schema [26]. Lamo et al. [45] describe this as an "IDE-like environment for creating DSML/DSLs" [45, p. 2] which is used to develop DSMLs and corresponding tools as well as to work with the created DSMLs. Thus, we need to consider language workbenches in our research.

Domain-specific modeling language (DSML). The aforementioned concepts all have in common that they are used to define and use DSMLs. The editor for designing one layer of a multilevel DSML in a tool is comparable to the editor for designing a DSML in a two-layer architecture. Multileveled-ness can then be achieved by re-instantiating the editor for the next lower level with the designed DSML as its meta-model [28]. Consequently, we consider the requirements of DSML tools in our research.

Relevant synonyms of tool in our context. Literature uses multiple different words to describe tools for the aforementioned concepts. The relevant synonyms we discovered are editor [21, 44], framework [48, 59], ide [17, 45, 80], and tool [21, 25, 28, 44, 45].

2.2.3. Search Process

In this section, we describe the search process used to find relevant literature for our review. All searches are conducted with Elsevier's Scopus 16, the self-proclaimed largest abstract and citation database of peer-reviewed literature.
Figure 2.3 shows the conducted search process. First, we define query limitations which apply to the searches of all four search concepts. Next, we perform separate searches for every search concept. Thereby, we iteratively limit the queries and filter the resulting references by analyzing their abstracts and keywords. The result is four sets of references related to our topic. We merge these sets into one by removing duplicates, add one more reference, and determine the relevant references by reading them.

Figure 2.3.: Literature Review Search Process.

Concept Independent Query Limitations

This section describes properties of all queries used with Scopus to find relevant literature. In the descriptions of the concept-specific queries, these parts are automatically part of every query without being mentioned again.

Modeling spelling. In our context, modeling can either be spelled with two l, e.g. in [2, 30, 38, 48], or one l, e.g. in [28, 59]. Therefore, both spellings have to be considered when querying databases for literature. This is done by substituting either spelling with an OR combination of both spellings.

Keyword search fields. All required or excluded phrases are searched for in the title, the abstract, the author keywords, or the index keywords of the searched literature.

Tool synonyms. Every query requires one or more of the four tool synonyms (editor, framework, ide, tool) to be present.

Limitations. All queries are limited to literature from the subject area Computer Science and the article language English.

Search period. All searches were conducted within a fixed period; literature added to Scopus after that period is not considered in our review.

Query Limitation Process

In this section, we describe the queries and their limitation processes for the four concepts determined in subsection 2.2.2. Thereby, every query extends the foregoing one, meaning all restrictions of the foregoing query also apply. The restrictions described in the previous section are implicitly applied to every query. An overview of the query limitation process is shown in Table 2.1. The table shows the number of resulting references for every iteration and links to the corresponding query, respectively the excluded areas, for the last iteration of every search concept. The details of the process are described below.
Deep Meta-Modeling

All deep meta-modeling queries use four alternative spellings of meta-modeling: meta-modelling, meta-modeling, metamodelling, and metamodeling. The first query (see Listing A.1) requires the keyword deep and one of the meta-modeling spellings. This results in 27 hits.
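For illustration only, the construction rules described above (spelling variants OR-ed together, one of the tool synonyms required, scoped to title, abstract, and keywords) can be sketched as a query builder. This is NOT the verbatim query of Listing A.1; the helper names are our own, and only the `TITLE-ABS-KEY` field scope is standard Scopus syntax.

```typescript
// Reconstructed sketch of how the deep meta-modeling query is assembled.
const metaModelingSpellings = [
  "meta-modelling", "meta-modeling", "metamodelling", "metamodeling",
];
const toolSynonyms = ["editor", "framework", "ide", "tool"];

// OR-combines a list of terms into one quoted group.
function orGroup(terms: string[]): string {
  return "(" + terms.map(t => `"${t}"`).join(" OR ") + ")";
}

// TITLE-ABS-KEY is Scopus's combined title/abstract/keyword field.
function deepMetaModelingQuery(): string {
  return `TITLE-ABS-KEY("deep" AND ${orGroup(metaModelingSpellings)} AND ${orGroup(toolSynonyms)})`;
}
```

The later limitation steps (excluding proceedings, keywords, and subject areas) would be appended to such a query in the same fashion.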

Table 2.1.: Summary of the Query Limitation Process

Iteration  Hits  Query / Areas    Comment

Deep Meta-Modelling
1          27    Listing A.1      Initial query
2          24    Listing A.2      Exclude conference proceedings
3          20    Listing A.3      Exclude keywords
4          14    Appendix A.2.1   Exclude irrelevant areas

Domain-Specific Modelling Language
1          217   Listing A.4      Initial query
2          195   Listing A.5      Exclude conference proceedings
3          137   Listing A.6      Exclude keywords
4          54    Listing A.7      Stricter query matching
5          27    Appendix A.2.2   Exclude irrelevant areas

Language Workbench
1          34    Listing A.8      Initial query
2          21    Listing A.9      Exclude results older than 2005
3          13    Appendix A.2.3   Exclude irrelevant areas

Multi-Level Modelling
1          75    Listing A.10     Initial query
2          66    Listing A.11     Exclude results older than 2001
3          55    Listing A.12     Exclude conference proceedings
4          42    Listing A.13     Exclude keywords
5          14    Appendix A.2.4   Exclude irrelevant areas

Next (see Listing A.2), we filter out all conference proceedings, called conference review in Scopus, because they do not contain a single paper but are simply the collection of all papers of a conference. This results in 24 hits.

Next (see Listing A.3), we filter out all results matching one or more of the following keywords, as these indicate that the literature is not related to our topic: "deep drawing", "analog circuits", biosensors, "neural network". This results in 20 hits.

Next, we analyze the abstract, author keywords, and index keywords to filter out literature which focuses on another area than our research. This leaves us with 14 hits. The excluded areas are listed in Appendix A.2.1.

Domain-Specific Modeling Language

The initial query (see Listing A.4) requires the keywords domain-specific, modeling, and language. This results in 217 hits.

Next (see Listing A.5), we filter out all conference proceedings, called conference review in Scopus, because they do not contain a single paper but are simply the collection of all papers of a conference. This leaves us with 195 hits.

Next (see Listing A.6), we filter out all results matching one or more of the following keywords, as these indicate that the literature is not related to our topic: internet of things, medicine, medical, "assisted living", VHDL, "artificial intelligence", multi-agent, multiagent, autocrud, "cyber-physical systems", "embedded systems". This results in 137 hits.

In the next query (see Listing A.7), we make the keyword matching stricter. The keyword domain-specific now has to precede the keyword modeling within two words. Furthermore, we also allow the keyword languages in addition to language, in case literature only talks in the plural about domain-specific modeling languages.
Instead of only matching the query part for domain-specific modeling language, we also allow matching the shortcuts DSML and DSMLs. This results in 54 hits.

Next, we analyze the abstract, author keywords, and index keywords to filter out literature which focuses on another area than our research. This leaves us with 27 hits. The excluded areas are listed in Appendix A.2.2.

Language Workbenches

The initial query (see Listing A.8) requires the keywords language workbench and requirements. This results in 34 hits.

In the next query (see Listing A.9), we limit the query to results from the year 2005 or newer, because the term language workbench was only defined by Fowler [26] in 2005. Consequently, older articles do not adhere to our definition of language workbenches. This results in 21 hits.

Next, we analyze the abstract, author keywords, and index keywords to filter out literature which focuses on another area than our research. This leaves us with 13 hits.

The excluded areas are listed in Appendix A.2.3.

Multilevel Modeling

All multilevel modeling queries use two alternative spellings of multilevel: multilevel (as used in [28]) and multi-level (as in [45]). One of the spellings is required by every query. The first query (see Listing A.10) additionally requires the keywords modeling and requirements. This results in 75 hits.

In the next query (see Listing A.11), we require all results to be from the year 2001 or newer. This is justified because the oldest foundational article [4] about multilevel modeling known to us, and the wiki 17 of the International Workshop on Multilevel Modelling 18, were published in 2001. This leaves us with 66 hits.

Next (see Listing A.12), we filter out all conference proceedings, called conference review in Scopus, because they do not contain a single paper but are simply the collection of all papers of a conference. This leaves us with 55 hits.

Next (see Listing A.13), we filter out all results matching one or more of the following keywords, as these indicate that the literature is not related to our topic: "internet of things", cellular, music, hospital, "financial audits", antennas, HLPSL, "IP cores". This results in 42 hits.

Next, we analyze the abstract, author keywords, and index keywords to filter out literature which focuses on another area than our research. This leaves us with 14 hits. The excluded areas are listed in Appendix A.2.4.

Selection of Relevant Literature

Taking all hits after the last iteration of the query limitation for all four search concepts gives us 61 unique hits. After removing [51], because we could not get access to it, this leaves us with the following 60 articles: [1-3, 8-10, 13, 15, 16, 18-20, 22, 24, 27-29, 31, 32, 34, 35, 37-39, 41-43, 46-50, 53-56, 61-64, 67-72, 74-79, 81-87, 89].
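The limitation and merge steps described above can be sketched as plain data operations; the record fields and sample entries below are invented, not real Scopus data.

```typescript
// Invented record shape for a search hit.
interface Ref {
  id: string;
  year: number;
  docType: string;
  keywords: string[];
}

// One limitation pass: drop proceedings, results older than `minYear`,
// and results matching any excluded keyword.
function limit(refs: Ref[], minYear: number, excluded: string[]): Ref[] {
  return refs
    .filter(r => r.docType !== "conference review")
    .filter(r => r.year >= minYear)
    .filter(r => !r.keywords.some(k => excluded.includes(k)));
}

// Merging the per-concept result sets while removing duplicates.
function mergeUnique(sets: string[][]): string[] {
  return [...new Set(sets.flat())].sort();
}

const hits: Ref[] = [
  { id: "r1", year: 2012, docType: "article", keywords: [] },
  { id: "r2", year: 2012, docType: "conference review", keywords: [] },
  { id: "r3", year: 1999, docType: "article", keywords: [] },
  { id: "r4", year: 2015, docType: "article", keywords: ["internet of things"] },
];
```

Applying `limit` to the sample leaves only the 2012 article, and `mergeUnique` corresponds to the deduplicating merge of the four concept-specific result sets.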
In order to identify the literature relevant to our research, we read these 60 articles and select the ones that discuss requirements of tools for at least one of the four search concepts. Requirements can be described in two ways. The first is to state them explicitly. An example of this from the relevant literature is: "a meta-modeling environment should allow the generation of a model editor to a wide extent from a metamodel of a DSML" [28, p. 6]. The second alternative is to describe them indirectly by explaining the features of a tool developed or analyzed in the article. An example of this from the relevant literature is: "The XLM does not only allow the user to perform typical modifications like adding and removing elements from the model, but it also supports changes of the types of model elements at runtime" [20, p. 3]. Applying this filter criterion results in 16 relevant articles.

2.2. Literature Review on Requirements of Multilevel Modeling Tools

We add Fowler's article about language workbenches [26] to the set of relevant literature because it founded the research on language workbenches by defining the term, and it provides requirements for them. Furthermore, the article could not be found with Scopus, as it was not published in a journal. This results in the final set of 17 relevant articles to analyze: [8, 18, 20, 26, 28, 29, 35, 46-48, 56, 61, 62, 76, 79, 81, 85].

Literature Analysis and Synthesis

In this subsection, we analyze the requirements discussed in the 17 relevant references resulting from the literature search. All these references were published in 2005 or later. 14 of the references are from 2010 or newer, and seven are even from 2014 or newer. This indicates a steady interest in the topic from 2010 onwards. We analyze the content of the references to extract requirements and aggregate them into categories. We first extract 104 requirements and subsequently aggregate them into 16 categories by using inductive reasoning. The result of this analysis is shown in Table 2.2. In order to provide traceability for the extracted requirements, we provide the references for each one. Below, we describe the determined categories in the context of their contained requirements.
Table 2.2.: Requirements for Multi-Level Modelling Tools

Model Representation
- Define DSMLs with UML-like notation [28]
- Immediately show class extensions in lower level classes [28]
- Represent language architectures [28]
- Integration of multiple DSMLs into one editor [28]
- Navigation through modelling levels [28]
- Visualize model elements with types and relations [56]
- Modelling tasks executed in a visual representation [20]
- Separate diagrams for different modelling levels and their relations [20]
- Separate diagrams for instantiations between modelling levels [20]
- Represent model in multiple ways with different projections [85]
- Separation of editable and storage representation [26]
- Edit a DSL's abstract representations through a projectional editor [26, 85]
- Graphical editor to design models [76]
- Editor allows to switch between multiple views: diagram, matrix, table [76]
- Creation of multi-language diagrams [29]
- Definition of graphical representations for language elements [81]
- Provide mapping between abstract syntax and graphical representation [81]
- Template-based syntax definition for models [18]
- DSL's abstract representation can handle errors and ambiguities [26]

Model Creation
- Define generic model templates that can be instantiated to models [47]
- Limit element instance extension with new attributes [18]
- Extend element instances with new attributes [48]
- Define instantiation modelling level of model elements with intrinsic features [28]
- Define instantiation meta-level of models and model elements with potency [18, 48, 79]
- Define languages that are integrated with each other [26]
- Define DSL as a metamodel including the domain concepts and rules [76]
- Extend languages with inheritance and concept extension [85]
- Classes of different modelling levels in the same model [28]
- Explicit modelling level definition for classes [28]
- Definition of language elements and their legal configurations [8, 85]
- Add elements to model [8, 20]
- Create instances for any ontological type independent of its definition level [56]
- Editor for model creation [29]
- Definition of intrinsic attributes [29]
- Use model as meta-model for next lower modelling level [20, 56]
- Arbitrary number of modelling levels [28, 48, 56]
- Define which language elements may be extended [18]

Model Update
- Editor for model modification [29]
- Change types of instances at runtime [20]
- Syntax-directed editing that ensures legal models [8]
- Modify models [8, 18, 28, 29, 35, 48, 56, 76, 85]
- Adding an intrinsic feature to a model element is automatically propagated to instances [28]

Model Deletion
- Remove elements from models [8, 20, 28]

References
- References with their concrete type defined at lower level by using potency [46]
- References to elements in other languages [85]
- References between model elements [18, 20, 46-48]

Compatibility
- Compatibility to existing meta-modelling languages [28]
- Compatibility to load EMF models [56]

Import & Export
- Provide default serialization for abstract representation [26]
- Store data as file [85]
- Load and store models in a human-readable textual notation [48]
- Instance serialization as XMI [56]

Validation
- Define model-wide constraints [46-48]
- Define reusable constraints [48]
- Define constraints with Epsilon Object Language [18, 47, 48]
- Define constraints with Java [18, 48]
- Define a constraint's evaluation meta-level with potency [46, 48]
- Automatic model consistency check on model change [20]
- Define constraint templates [20]
- Define constraints for types [20]
- Support model validation and checking [61]
- Check for semantic errors [85]
- DSML implementations support validation [62]
- Define constraints with Object Constraint Language [56]

Tool Generation
- Automatic editor generation for defined DSML [28, 29, 62]
- Automatically derive syntax highlighting for created DSLs [85]

Code Generation & Templates
- Definition of template-based code generators with Epsilon Generation Language [18, 46, 47]
- Definition of type-generic code generators with Epsilon Generation Language [47]
- Generate code following the JMI specification [48]
- Definition of code generators [26, 76, 85]
- Automatically execute code generation on save [85]

Transformation
- Define type-generic behavior with Epsilon Object Language [47]
- Define model transformations with the Epsilon Transformation Language [18, 47, 48]
- Define model transformations with ATLAS Transformation Language [56]
- Instantiate models with Epsilon Object Language [48]
- Define model behavior with Epsilon Object Language [47, 48]
- Define model behavior with Java [47]
- Allow definition of transformations between arbitrary languages [85]
- Allow definition of refactorings [85]
- Transform language defined with XSD to metamodel [62]
- Define and execute complex model modifications [8]

Versioning
- Database for all kinds of modelling artifacts [79]
- Support for popular version control systems [85]
- Repository for metamodels [76, 81]
- Repository for models (instances) [8, 81]
- Provide diff and merge of abstract representation [26]

Migration
- Migrate legacy models to new version of language definition [8]
- Automatically propagate changes on metamodel in repository to models [76, 81]
- Automatically propagate changes on metamodel in repository to code generators [76]
- Definition of migration rules [8]
- Manual or automatic application of migration rules on model repository [8]
- Automatically adapt derived constraints to changes of elements in its scope [20]
- Evolve a DSL and any code built in it together [26]
- Modification of a model must be propagated to all affected models on lower levels [28]

Utility
- Undo and redo for all API calls [48]

Quality Attributes
- Extensibility [81]
- Flexibility [61]
- Interoperability [61, 81]
- Scalability [61, 81]
- User-friendly interface [61]

API
- API for tool extension and modification [81]
- API for CRUD operations on modelling artefacts [79]
- API for validation [79]
- API for model creation [48]

Model Representation

Model representation is about making the abstract in-memory version of a model available to the user by rendering it to a viewable format.
One way to edit a language's abstract representation is using a projectional editor [26, 85]. Thereby, the editable abstract representation should be separated from the stored serialization. Furthermore, a language's abstract representation must be able to tolerate errors and ambiguities [26]. Projectional editing can be used to represent the same model in multiple ways by using different projections [85]. For instance, an editor can allow switching between multiple views of one model, such as a diagram, matrix, or table [76]. Another option for model design is using a graphical editor [76]. For this, a UML-like notation [28] can be used. Similarly, references propose to execute modeling tasks in a visual representation [20] and to visualize model elements with their types and relations [56]. Congruently, modeling tools should supply a mapping between a model's abstract and graphical representation for graphical modeling languages

[81]. Furthermore, such tools should provide the capability to define custom graphical representations for language elements [81]. In a multilevel context, additional requirements exist [28]. The tool needs to be able to represent a language's architecture, including classes on different levels. One must be able to navigate through the modeling levels. Also, extensions of a class should be directly shown in affected classes on lower levels. Lastly, it is desirable to be able to integrate multiple DSMLs into one editor [28]. For instance, this could be done by creating multi-language diagrams [29]. An approach for the visualization of multilevel hierarchies is using separate diagrams for each modeling level and then having further diagrams which define the instantiation relationships between two levels [20]. In the context of textual languages, the syntax of models can be defined based on templates [18].

Model Creation

This category defines requirements related to the creation of models. Basic requirements are an editor for model creation [29] and the possibility to add elements to a model [8, 20]. One way to define a new DSL is defining it as a meta-model which includes the domain concepts and rules [76]. Congruently, two further references [8, 85] require the capability to define language elements and their legal configurations. Going one step further, Fowler [26] requires the ability to define languages which can be integrated with one another. Also, languages could be created by inheriting from a base language and extending its concepts [85]. To allow multilevel modeling, support for an arbitrary number of modeling levels is required [28, 48, 56]. Thereby, models should be usable as meta-models for the next lower modeling levels [20, 56]. Because technical descriptions sometimes contain concepts from different modeling levels, classes of different levels must be allowed in the same model.
Consequently, it is required that the modeling level of a class can be defined explicitly [28]. To allow instantiation of model elements not only at the next lower modeling level, potency can be used [18, 48, 79]. Potency allows defining in a meta-model how many times a model element (e.g. a class or attribute) has to be instantiated in lower levels before a value has to be assigned [48]. A similar concept is the intrinsic feature: model elements can be marked as intrinsic and assigned an instantiation level. Such a feature can only be instantiated on the specified level [28]. For higher modeling flexibility in a multilevel context, instances of a type should be extensible with additional attributes which can then be used in lower modeling levels [48]. But because extension should not always be allowed, it must be possible to limit the extension for specified model elements [18].
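To make the potency concept more tangible, the following TypeScript sketch illustrates it; the type `Clabject` and the functions `instantiate` and `assignValue` are our own illustrative names and are not taken from metaDepth or any other cited tool.

```typescript
// Sketch of potency: an element with potency p can still be instantiated
// p times; each instantiation decrements the potency, and a value may
// only be assigned once the potency has reached 0.
interface Clabject {
  name: string;
  potency: number;   // remaining instantiation steps
  value?: unknown;   // only assignable at potency 0
}

function instantiate(type: Clabject, name: string): Clabject {
  if (type.potency <= 0) {
    throw new Error(`${type.name} has potency 0 and cannot be instantiated further`);
  }
  return { name, potency: type.potency - 1 };
}

function assignValue(el: Clabject, value: unknown): Clabject {
  if (el.potency !== 0) {
    throw new Error(`value can only be assigned at potency 0 (got ${el.potency})`);
  }
  return { ...el, value };
}

// A meta-model attribute with potency 2 must be instantiated twice
// (i.e. on two lower modeling levels) before it can carry a value.
const attr: Clabject = { name: "price", potency: 2 };
const level1 = instantiate(attr, "price");   // potency 1
const level2 = instantiate(level1, "price"); // potency 0
const withValue = assignValue(level2, 9.99);
```

An intrinsic feature could be sketched analogously by storing a fixed target level instead of a countdown.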

Model Update

The Model Update category comprises requirements describing the capabilities to edit models after they have been created. In general, it is necessary to be able to edit models [8, 18, 28, 29, 35, 48, 56, 76, 85]. For this, an editor should be provided [29]. Further improvements in this regard are ensuring that every editing step results in a legal model by providing syntax-directed editing [8] and providing support to change model elements' types at runtime [20]. In the context of their multilevel modeling framework metaDepth, which allows extending language elements in lower levels, de Lara et al. [18] argue that it is necessary that language designers can restrict which language elements may be extended (e.g. with new fields). Also in a multilevel context, intrinsic features that are added to a model element need to be automatically added to its instances as well. Thereby, an intrinsic feature (e.g. an attribute or association) can only be instantiated on the instantiation level specified in the feature [28].

Model Deletion

Requirements relating to the deletion of models or their elements are not discussed extensively in the analyzed references. Only [8, 20, 28] explicitly require the capability to remove elements from models.

References

To reuse an element defined in a model in other parts of a model (e.g. as the type of a field), the element needs to be referenced. Therefore, it should be possible to define references between model elements [18, 20, 46-48]. To compose languages from several domains, elements from other languages should also be referenceable [85]. In combination with multilevel modeling, assigning potency to the type of a reference allows defining the reference's concrete type at a lower level.
This has the advantage that the concrete instantiated type of the reference does not need to be known when defining the reference [46].

Compatibility

When designing a multilevel modeling framework, it should be possible to import existing DSMLs without excessive effort. The reason for this is the high number of existing DSMLs in enterprise modeling [28]. For instance, this could be achieved by providing compatibility with EMF models [56].

Import & Export

In general, the import and export of model data is not discussed in much detail in the analyzed literature. Generally, it should be possible to store models as files [85].

For model instances, one possibility is using the XML Metadata Interchange (XMI) [56]. Another possibility is using a human-readable textual notation to load and store models [48]. As a more abstract challenge, Fowler [26] describes the need to provide a serialization that allows storing and loading the abstract in-memory representation of a DSL.

Validation

This category covers requirements related to the validation of models. Accordingly, tools should provide capabilities for checking and validating models [61], checking for semantic errors [85], and supporting validation for DSML implementations [62]. Another requirement is the automatic check of model consistency whenever the model is changed [20]. Constraints can be used to define valid models, for instance by defining type constraints which need to be fulfilled by all instances of a type [20]. Different technologies for defining constraints are suggested: Java [18, 48], the Epsilon Object Language (EOL) [18, 47, 48], or the Object Constraint Language (OCL) [56]. More specific approaches to the definition of constraints are also described. Demuth et al. [20] define constraint templates which allow defining generic constraints. These can be instantiated to concrete constraints for all elements with compatible types [20]. Another way of reusing constraints is the definition of model-wide constraints [46-48]. Reusability is achieved by defining them once and then assigning them to multiple model elements [48]. To add flexibility in the evaluation of constraints, potency can be assigned to a constraint to define its evaluation meta-level [46, 48]. For instance, this allows defining constraints that only have to be fulfilled two levels below their definition instead of, as in the standard case, at the next lower level.

Tool Generation

Meta-modeling tools should provide automatic editor generation for DSMLs defined with the tool [28, 29, 62].
The first reason for this is that a DSML needs an editor to be used effectively [28, 29]. Consequently, the creation or modification of DSMLs is only practical if the editor creation has reasonable cost [29]. Secondly, it is not feasible to expect domain experts who design local DSMLs to be able to implement an editor for their DSML [28]. Another tool generation aspect is the automatic derivation of syntax highlighting for text-based languages [85].

Code Generation & Templates

Code generation is used to transform the abstract representation of a language to a target language (e.g. C# or Java) to get an executable or compilable representation of the language [26, 85]. One example for this is using code generation that results in Java

code compatible with the Java Metadata Interface (JMI) [48]. Thereby, the generation could be executed automatically when language changes are saved [85]. Three references [26, 76, 85] require the definition of code generators as part of a language definition. One way to define code generation is based on templates that are filled with concrete values from the model that the code is generated for [46, 47]. For instance, this can be done with the Epsilon Generation Language (EGL) [18, 46, 47]. A more advanced approach is the definition of type-generic code generators in EGL. They allow code generation for all types adhering to the generic template's prerequisites [47].

Transformation

Völter and Visser [85] describe transformations as mappings between different models. In this context, they introduce the tool capability to allow the definition of transformations between any desired languages. More specifically, refactorings can be defined for a language [85]. Similarly, Braatz and Brandt [8] require the definition and execution of complex model modifications. Concrete proposed technologies for the definition of model transformations are the Epsilon Transformation Language (ETL) [18, 47, 48] and the ATLAS Transformation Language (ATL) [56]. A subset of model transformations is the definition of model behavior or "in-place transformations" [47, p. 5], which essentially allow executing a model [47]. To define model behavior, our references suggest the Epsilon Object Language (EOL) [47, 48] or Java [47]. To get reusable model behavior, EOL can also be used to define type-generic behavior [47]. This can be executed on all models adhering to the behavior's expected concepts. Furthermore, EOL can be used to populate models with instances [48].
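The template-based generation idea described above can be sketched in a few lines of TypeScript; EGL is the real technology referenced in the literature, whereas the template syntax, the `Model` type, and the `generate` function below are hypothetical simplifications of our own.

```typescript
// Minimal sketch of template-based code generation: placeholders in a
// template string are filled with concrete values from the model the
// code is generated for. This mimics the idea behind EGL-style
// generators, but it is not EGL itself.
type Model = { className: string; fields: { name: string; type: string }[] };

function generate(template: string, model: Model): string {
  const fields = model.fields
    .map((f) => `  private ${f.type} ${f.name};`)
    .join("\n");
  return template
    .replace("${className}", model.className)
    .replace("${fields}", fields);
}

// A template for a Java class skeleton with two placeholders.
const template = `public class \${className} {\n\${fields}\n}`;

const generated = generate(template, {
  className: "Person",
  fields: [{ name: "name", type: "String" }],
});
// generated now contains a Java class skeleton for Person
```

A type-generic generator would additionally check the model against the prerequisites expected by the template before filling it in.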
A more specific usage of model transformations is creating meta-models by transforming a language defined with the XML Schema Definition (XSD) [62].

Versioning

For software developers, versioning of code is a day-to-day practice. It involves the safe storage of code artifacts and the tracking of their editing history. For this, developers often use version control systems (e.g. Git). Correspondingly, versioning could be provided in language tools by integrating standard version control systems [85]. Other references simply suggest that there should be a repository for meta-models [76, 81] and instance models [8, 81]. Similarly, Van Mierlo et al. [79] describe a repository for the storage of all modeling artifacts in their tool. To provide proper version control for DSLs, diff and merge should be implemented directly on a language's abstract representation [26].

Migration

Migration is necessary for the evolution of languages [8]. Therefore, tools should be able to migrate models defined in a legacy version of a language to the language's current definition [8]. In this regard, Fowler [26] states the importance of a tool's capability to evolve a DSL and any code built in the DSL together. One approach for the definition of migrations is the usage of migration rules. These rules can then be executed manually or automatically on a repository to migrate contained legacy models [8]. Going one step further, changes on a meta-model in a repository should automatically be propagated to models [76, 81] and code generators [76] in the same repository. Similarly, in a multilevel modeling environment, the modification of a model must be propagated to all affected models on lower levels [28]. A more specific feature is suggested by Demuth et al. [20]: derived constraints are automatically adapted to changes of the model elements in their scope.

Utility

Utility requirements describe functionality that is not necessarily needed but is practical for the user. Only one [48] of the analyzed references mentions utility functionality: they provide undo and redo for all API calls against their multilevel modeling framework.

Quality Attributes

Although not many references describe quality attributes, we could extract five. Tools should support easy modification and extension [81]. Relatedly, tools should provide flexibility to facilitate fast adaption to new abstractions [61]. Furthermore, a tool should interoperate with other tools and support standard protocols [61, 81]. To cope with possibly increasing complexity in the future, scalability is necessary [61, 81]. Lastly, a tool's interface should be user-friendly [61].

API

An application programming interface (API) allows a developer to access exposed functionality of a tool programmatically.
Consequently, this can be any functionality of the tool as long as it is made available publicly. Such an API can be useful to allow extending and modifying a tool without having access to its source code [81]. More related to modeling itself, Van Mierlo et al. [79] offer an API for CRUD operations on modeling artifacts as well as an API for validating them. Similarly, but less extensively, de Lara et al. [48] offer a Java API for model creation.
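An API for CRUD operations on modeling artifacts could look like the following TypeScript sketch; the interface, class, and method names are our own and do not reproduce the API of any cited tool.

```typescript
// Hypothetical sketch of a CRUD API for modeling artifacts, in the
// spirit of the APIs described above.
interface Artifact {
  id: string;
  kind: string; // e.g. "metamodel", "model", "generator"
  data: Record<string, unknown>;
}

class ModelRepository {
  private artifacts = new Map<string, Artifact>();

  create(artifact: Artifact): void {
    if (this.artifacts.has(artifact.id)) {
      throw new Error(`duplicate id ${artifact.id}`);
    }
    this.artifacts.set(artifact.id, artifact);
  }

  read(id: string): Artifact | undefined {
    return this.artifacts.get(id);
  }

  update(id: string, data: Record<string, unknown>): void {
    const existing = this.artifacts.get(id);
    if (!existing) throw new Error(`unknown id ${id}`);
    existing.data = { ...existing.data, ...data };
  }

  delete(id: string): boolean {
    return this.artifacts.delete(id);
  }
}

const repo = new ModelRepository();
repo.create({ id: "c1", kind: "metamodel", data: { name: "Person" } });
repo.update("c1", { abstract: false });
```

Exposing such a repository publicly would also be a natural hook for the undo/redo and validation APIs mentioned above.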

2.3. Tool Analysis

In this section, we analyze the implemented functionality of nine editors and derive functional requirements from it. The literature review in section 2.2 focused on a rigorous analysis of the academic knowledge base to gather a wide collection of requirements. In addition to this, the tool analysis provides us with further requirements and an indication of practical relevance. Furthermore, the analysis of industry tools contributes more concrete requirements to enrich and specify the often more abstract requirements gathered from the literature review. This will help us to select and prioritize the requirements for our implementation in chapter 3. To get worthwhile results, we follow a structured approach. First, we introduce the analyzed editors. Then, we demonstrate the requirements' significance by relating them to the categories determined in the rigorous literature review. Finally, we give a precise definition for each of the requirements.

Analyzed Editors

In this subsection, we introduce the editors whose behavior and functionality we analyzed. We selected editors for structured data because JSON is also a data format for storing structured data. To get a broad collection of generic requirements, we chose editors from different vendors and for multiple data formats. The editor selection was conducted in close cooperation with our industry partner. They have extensive experience in the development of structured data editors as desktop as well as web applications. Furthermore, this allows us to consider more concrete requirements in agreement with their implementation goals, in comparison to the unbiased requirements collection from the literature review.

EMF Editors

The editors in this section are part of the Eclipse Modeling Framework (EMF) [73]. EMF is an open-source meta-modeling framework developed by the Eclipse Foundation.
It provides an editor to develop models for structured data based on its base meta-model Ecore. Ecore itself is also an instance of Ecore; hence, it is its own meta-model. EMF allows the automatic generation of Java code and instance editors for created models. Instances created with EMF cannot be used as meta-classes for further modeling; hence, EMF supports meta-modeling with two levels. For every meta-model, a generation model (genmodel) can be defined to customize generation parameters for code and editors, e.g. the Java packages generated classes are placed in [73]. The editors reviewed by us are part of version (Release: ). EMF is actively maintained; its current version is (Release: ).

EMF Model Editor (E 1). This editor allows the creation and editing of structured models based on the Ecore language [73].

EMF Genmodel Editor (E 2). This editor allows editing the generation model for a model. The generation model specifies code generation parameters such as the copyright notice, packages, class naming, labels, etc. [73].

EMF Instance Editor (E 3). An instance editor is generated by the EMF framework from a model and a generation model. This editor allows creating and editing instances of its model [73].

EMF Forms Editors

The editors in this section are part of the open-source framework EMF Forms, which itself is a subcomponent of the open-source framework EMF Client Platform. EMF Forms provides the generation of CRUD UIs based on an EMF model and a UI description (called view model). By using different renderers, the framework allows generation for the technology stacks JavaFX, Swing, SWT, and Web. EMF Forms is still actively developed. The editors reviewed by us are part of version .

EMF Forms Generic Editor (E 4). This editor allows opening and editing instances of arbitrary EMF models. It analyzes the instance's corresponding meta-model and uses the rendering engine of EMF Forms to create CRUD user interfaces for created objects in the instance model.

EMF Forms Ecore Editor (E 5). This editor, like the EMF Model Editor, allows creating and editing structured meta-models based on the Ecore language. It is based on the previously introduced generic editor and adapted for instances of the Ecore model.

XML Spy

The editors in this section are part of the tool XML Spy, a commercial tool developed by Altova. According to Altova, it is the industry's best-selling XML editor for modeling, editing, transforming, and debugging XML-related technologies such as XML Schema, DTD, XSLT, XPath, or XQuery. Furthermore, it also contains tools for similar technologies such as JSON Schema, JSON, HTML, CSS, and more.
For our analysis, we used the free evaluation version of the tool, which is not restricted in its features besides being limited to 30 days of use.

XML Spy - XSD Editor (E 6). This editor is a graphical editor for the creation and editing of XML Schemata in the XSD format. XML Schemata define the valid elements in XML files. Consequently, they can be used to validate an XML file's structure or to suggest valid elements during its creation.

XML Spy - JSON Schema Editor (E 7). This editor is a graphical editor for the creation of JSON Schemata.

JSON Schema Editor (E 8)

The JSON Schema Editor is a tree-based open-source editor based on AngularJS that allows creating and editing JSON objects defined by a hard-coded JSON Schema. The editor's repository is owned by EclipseSource. The editor is not actively developed at the moment (latest commit: , last checked: ).

JSON Forms Editor (E 9)

The JSON Forms Editor is owned by EclipseSource. The editor runs as a web application and allows creating UI schemata for JSON Forms based on a JSON Schema. Thereby, the JSON Schema can be created or modified while editing the UI schema. A UI schema defines a form generated from a JSON Schema by the JSON Forms framework. For our requirements collection, we used the online test instance hosted by EclipseSource.

Tool Requirements

In this section, we describe the functional requirements derived from the editors introduced above. We mapped the requirements to the requirement categories determined in the literature review (see subsection 2.2.4). Table 2.3 shows an overview of the analyzed tool requirements. The table shows to which requirement category the requirements belong and by how many editors a requirement is fulfilled. Furthermore, it is shown which of the editors satisfies which requirement. Below, we describe every one of the defined requirements.

Model Representation

These requirements describe functionality related to displaying the editor's currently loaded model to the user. They map to the requirement category of the same name determined in the literature review.

Table 2.3.: The Analyzed Tool Requirements Mapped to the Requirement Categories Determined in the Literature Review (the number in parentheses denotes how many of the editors E 1 to E 9 fulfil the requirement)

Model Representation
- Element Containment Tree (9)
- Element Grid (2)
- Element Hierarchy Hints (1)
- Element Hierarchy Information (5)
- Multiple Synchronized Views (2)
- Property Grouping (1)
- Root Elements Overview (3)
- Textual Model Representation (2)

Model Creation
- Add Elements in Properties View (1)
- Contextual Element Creation (9)
- Dynamic Inst. Creation of Defined Types (2)

Model Update
- Contextual Drag and Drop (9)
- Edit Element Properties (9)
- Element Name Refactoring (1)
- Modify Related Schema (1)
- Typed Property Editing (6)

Model Deletion
- Element Deletion (9)

References
- Element Extraction and Reference (1)
- References Between Elements (6)
- Show Element References (5)

Import & Export
- Export Model as Text (1)
- Load and Edit Further Models (6)
- Load Data Schema from Github (1)
- Load Data Schema by Upload (1)
- Load Data Schema from URL (3)
- Load Model from File (7)
- Persist Edited Model as File (7)

Validation
- Automatic Validation (3)
- Property Validation (5)
- Property Validation Shown in Tree (1)
- Structural Instance Validation (4)

Utility
- Copy, Cut, and Paste (6)
- Undo and Redo (7)

API
- Trigger External Operations on Model (3)

TR 1: Element Containment Tree

The editor shows a containment tree with a tree element for every data object defined in the current model. This tree shows the elements' containment hierarchy: the user can recognize which element contains which other elements in the model simply by looking at the tree (e.g. by indentation and/or connecting lines). Furthermore, the tree allows collapsing and expanding elements. If an element is collapsed, none of its contained elements are shown; if it is expanded, all of them are shown. Every element displays an identifying label of its associated data object. The concrete label displayed depends on the available information about the element, both from its properties and its definition in the meta-model.

TR 2: Element Grid

The editor shows an interactive grid with an element for every data object and for every property defined in the current model. The contained children of an element as well as its properties are displayed as a sub-grid of their parent. A grid element can either be collapsed or expanded. When an element is collapsed, it only occupies one cell of the grid and at least the element's type is displayed; no properties or contained elements are shown. When an element is expanded from its collapsed state, the element's grid cell expands and a new sub-grid is shown inside the cell. This sub-grid contains the expanded element's properties and contained elements and itself works like the previously described element grid.

TR 3: Element Hierarchy Hints

The editor explicitly shows the element hierarchy. For every element in the current model, the editor displays indicators that show which types of elements are legal contained children of the annotated element.

TR 4: Element Hierarchy Information

The editor is able to show an element type's inheritance hierarchy. Thereby, the inheritance hierarchy of all supertypes is shown, too.
Furthermore, for every property of the analyzed type, the editor shows which type in the hierarchy defines the property.

TR 5: Multiple Synchronized Views
The editor supports multiple representations of the same model (e.g. element containment tree and textual representation). When the model is changed in one of the views, all other views are updated automatically to correctly represent the new data.
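The synchronization behavior of TR 5 can be sketched with a simple observer pattern, in which every view subscribes to one shared model store. The names and structure below are illustrative assumptions, not taken from any analyzed tool:

```typescript
// Sketch of view synchronization (TR 5): all views re-render from one store.
type Listener = (data: object) => void;

class ModelStore {
  private listeners: Listener[] = [];
  private data: object = {};
  subscribe(l: Listener) { this.listeners.push(l); }
  update(data: object) {
    this.data = data;
    // every registered view is notified and re-renders from the same state
    for (const l of this.listeners) l(this.data);
  }
}

const store = new ModelStore();
let treeView = "";
let textView = "";
store.subscribe(d => { treeView = `tree:${Object.keys(d).length} nodes`; });
store.subscribe(d => { textView = JSON.stringify(d); });
store.update({ a: 1, b: 2 });
console.log(treeView); // tree:2 nodes
console.log(textView); // {"a":1,"b":2}
```

Because all views derive their content from the same store, no view can display stale data after a model change.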

TR 6: Property Grouping
When displaying an element's properties, the editor groups them into basic and advanced properties. The grouping is visualized by placing properties of the same group adjacent to each other and dividing the groups from each other (e.g. by a border).

TR 7: Root Elements Overview
The editor displays an element for every data object at the highest level of the model. For every element, at least its type is shown.

TR 8: Textual Model Representation
The editor shows an editable textual representation of all data contained in the model. The editor allows editing all data objects as well as their properties. Additionally, the model is serialized in such a way that the data objects' containment hierarchy can be deduced unambiguously.

Model Creation
These requirements describe functionality related to the creation of new models and model elements by the user. They map to the requirement category described in subsection

TR 9: Add Elements in Properties View
The editor allows adding new contained children inside the rendered properties view of the currently selected data object. Thereby, the created child can only be of a type allowed by the schema for the selected parent object. After its creation, the child is added to the model and an appropriate element is created and added to the model's representation.

TR 10: Contextual Element Creation
The editor allows creating new data objects as children of other data objects or of the root object. Thereby, these objects can only be created at legal positions in the model as defined by the meta-model. Created data objects are automatically added to the model's representation.

TR 11: Dynamic Instance Creation of Defined Types
This requirement assumes that the current model defines some kind of data types (e.g. classes). If the selected type is instantiable, the editor allows creating a new instance of this type in a separate model.
The separate model's data schema is the model defining the instantiated type.

Model Update
These requirements describe the editor's capabilities to modify existing data objects in the model. They map to the requirement category described in subsection

TR 12: Contextual Drag and Drop
The drag and drop functionality works on the elements displayed in the model's representation. The editor allows dragging elements and dropping them only at other valid positions in the model. Thereby, the schema defines which positions are valid: elements can only be dropped if the new parent element can contain them as children. When an element is moved, all its contained children are moved, too. The element's associated data object is moved to the new position in the model.

TR 13: Edit Element Properties
The editor allows editing the properties' values of all elements in the model.

TR 14: Element Name Refactoring
The editor allows renaming an element and automatically adapts all references to this element in the current model.

TR 15: Modify Related Schema
The editor allows modifying the current model's data schema. Thereby, elements can be added to and removed from the schema.

TR 16: Typed Property Editing
When editing the properties of an element, the editable representation of a property is adapted to its data type. Depending on the data type, this prevents the user from entering incorrect data. Example: a property of type date could be set with a date picker. As a result, the chosen date automatically has the correct format.

Model Deletion
This requirement describes functionality related to removing data from the model and maps to the requirement category described in subsection

TR 17: Element Deletion
The editor allows deleting elements from the model's representation. Thereby, the element and all its contained children are deleted from the representation and the model itself.
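The typed editing described in TR 16 can be sketched as a mapping from a property's schema definition to a suitable input control. The function and control names below are illustrative assumptions, not taken from any analyzed tool:

```typescript
// Illustrative sketch (not an analyzed tool's actual API): choosing an input
// control based on a JSON-Schema-like property definition, as in TR 16.
type PropertySchema = { type: string; format?: string; enum?: string[] };

function controlFor(schema: PropertySchema): string {
  if (schema.enum) return "dropdown";        // restricts input to legal choices
  switch (schema.type) {
    case "boolean": return "checkbox";       // only true or false selectable
    case "integer":
    case "number":  return "number-input";   // rejects non-numeric input
    case "string":  return schema.format === "date" ? "date-picker" : "text-input";
    default:        return "text-input";
  }
}

console.log(controlFor({ type: "string", format: "date" })); // date-picker
console.log(controlFor({ type: "boolean" }));                // checkbox
```

Each control only accepts values that are legal for the property's type, so many input errors become impossible by construction.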

References
These requirements describe functionality related to defining references between different model elements. They map to the requirement category described in subsection

TR 18: Element Extraction and Reference
The editor allows extracting suitable model elements to their own model and then referencing them from the original model. The newly created model uses the same schema as the model the element has been extracted from.

TR 19: References Between Elements
The editor allows creating references between elements. This means that an element property can link another existing model element instead of containing the value directly. The elements can be from different loaded models. When creating a reference, the editor only allows linking elements whose type is compatible with the referencing property's type.

TR 20: Show Element References
The editor is able to show where a selected element is referenced in the current model or another (implicitly) loaded schema or model.

Import & Export
These requirements describe functionality related to the loading and saving of data schemata and models from various sources. They map to the requirement category described in subsection

TR 21: Export Model as Text
The editor allows serializing the current model to text and making the serialization available to the user.

TR 22: Load and Edit Further Models
In addition to the current model, the editor allows loading further models. These additional models can be edited in the editor as if they were the originally opened model. Furthermore, all loaded models, including the original one, can reference elements from the other models.
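As an illustration of TR 19 and TR 20, the following TypeScript sketch resolves an element reference by id across several loaded models. The data shapes and names are assumptions for illustration only, not the design of any analyzed tool:

```typescript
// Hypothetical sketch of cross-model references: a property stores the id of
// another element instead of containing the value directly (cf. TR 19).
interface Element { id: string; type: string; refs?: { [prop: string]: string } }

function resolveRef(models: Element[][], id: string): Element | undefined {
  for (const model of models)
    for (const el of model)
      if (el.id === id) return el;
  return undefined; // dangling reference: the target is in no loaded model
}

const modelA: Element[] = [{ id: "t1", type: "Task", refs: { assignee: "u1" } }];
const modelB: Element[] = [{ id: "u1", type: "User" }];
const target = resolveRef([modelA, modelB], modelA[0].refs!.assignee);
console.log(target?.type); // User
```

Showing where an element is referenced (TR 20) is the inverse lookup: scanning all loaded models for elements whose reference properties contain the selected element's id.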

TR 23: Load Data Schema from Github
The editor allows importing a data schema from a public or private Github repository. To allow loading from a private repository, the user can log in with her Github account and grant the corresponding permissions to the editor.

TR 24: Load Data Schema by Upload
Assuming that the editor runs as a web application, the editor allows uploading a schema from the device the user accesses it with.

TR 25: Load Data Schema from URL
The editor allows importing a data schema from a URL provided by the user.

TR 26: Load Model from File
The editor allows loading a model from a file saved on the user's device executing the editor.

TR 27: Persist Edited Model as File
The editor allows serializing the current model and saving it in a file on the user's device.

Validation
These requirements describe functionality related to validating that a model adheres to its specification. They map to the requirement category described in subsection

TR 28: Automatic Validation
Whenever a model element is changed, the editor automatically re-validates the model containing the modified element.

TR 29: Property Validation
The editor validates the non-containment properties of all model elements against the meta-model. Thereby, the editor checks for every property whether the restrictions defined in the schema (e.g. that a string is not empty) are fulfilled. Validation errors are marked on the validated property's visual representation.

TR 30: Property Validation Shown in Tree
When the validation of a property results in an error, the editor visually marks the element which contains the property in the model's element containment tree. This

requirement assumes that the editor fulfills the Element Containment Tree requirement (see TR 1).

TR 31: Structural Instance Validation
The editor validates the structure of the current model against the meta-model. Thereby, the editor checks whether elements adhere to containment constraints and whether required references are set. Containment constraints define the multiplicity and the type of a type's containment property. If the validation finds any errors, they are reported and the invalid elements are marked in the model's representation.

Utility
These requirements describe utility functions that improve the editor's general usage experience. They map to the requirement category described in subsection

TR 32: Copy, Cut, and Paste
The editor allows copying, cutting, and pasting model elements. Whenever an element is copied or cut, the element is copied to the clipboard with all its properties and contained child elements. If the element was cut, it and all its children are removed from the model. When an element is pasted from the clipboard, it is added to the model as a child of the element that it has been pasted on. Thereby, the pasted element's contained children are inserted as well.

TR 33: Undo and Redo
The editor is able to undo any user action that changed the current model. This means the model is reverted to the state it was in before the action was executed. For instance, the user deletes an element A from the model. If the user then uses the undo function, the element A is part of the model again as if it was never deleted. Furthermore, the editor is able to redo undone actions. Concretely, any action that has been undone by the user can also be redone. After the redo functionality is executed, the model is in the state as if the undo was never executed.
For instance, if the redo function is used after the undo action in the example above, element A is removed from the model again.

API
This requirement describes functionality related to accessing and using an interface to the editor's status and data. It maps to the requirement category described in subsection

TR 34: Trigger External Operations on Model
The editor allows using the model as a parameter for external operations. Thereby, the operations can be executed directly from within the editor without the need to explicitly start the called programs. These operations can simply read the model's content, e.g. for code generation, or transform the model itself.
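The undo and redo behavior described in TR 33 can be sketched as a command stack. This is a minimal illustrative implementation, not the mechanism of any analyzed tool:

```typescript
// Minimal command-stack sketch of undo/redo (TR 33).
interface Command { do(): void; undo(): void }

class History {
  private done: Command[] = [];
  private undone: Command[] = [];
  execute(cmd: Command) { cmd.do(); this.done.push(cmd); this.undone = []; }
  undo() { const c = this.done.pop(); if (c) { c.undo(); this.undone.push(c); } }
  redo() { const c = this.undone.pop(); if (c) { c.do(); this.done.push(c); } }
}

// Example: deleting element "A" from a model, then undoing and redoing it.
const model = new Set(["A", "B"]);
const deleteA: Command = { do: () => model.delete("A"), undo: () => model.add("A") };
const history = new History();
history.execute(deleteA); // A is removed
history.undo();           // A is part of the model again, as if never deleted
history.redo();           // A is removed again, as if the undo never happened
console.log([...model]);  // [ 'B' ]
```

Note that executing a new command clears the redo stack, so a redo is only possible directly after one or more undos.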


3. Requirements

In this chapter, we determine the requirements for our implementation of the model-driven JSON editor. The whole set of requirements consists of the editor's functional requirements (FRs), implementation constraints (ICs), and quality attributes (QAs). To select the relevant requirements and implementation constraints, we conduct an expert interview in cooperation with our industry partner EclipseSource München GmbH. We interview the company's senior software architects. In the expert interview, we select the functional requirements based on our analysis of the state of the art in chapter 2 and assign one of the three priorities described in section 3.1. Furthermore, we link them to their source requirements in chapter 2 and describe the differences to their sources. The resulting functional requirements are categorized by priority and described in section 3.2. The implementation constraints are not directly related to existing requirements from the state of the art. They are described in section 3.3.

3.1. Priorities

In this section, we describe the three priorities which we use to define the importance of the functional requirements.

1 Must Have
Requirements with this priority are essential for the proper usage and functioning of the model-driven editor. The editor is not considered finished until they are implemented.

2 Desirable
Requirements with this priority represent functionality that greatly improves the editor's quality and its applicability to different contexts. However, in comparison to requirements of priority 1, the editor still provides acceptable results without implementing any of them.

3 Nice To Have
Requirements with this priority improve the editor's overall quality and applicability while not being necessary for satisfying results.
Implementation of these requirements is anticipated if there is leftover time, if the implementation takes little effort, or if it comes with little additional time investment when implementing a requirement of priority 1 or 2.

3.2. Functional Requirements

In this section, we describe the selected functional requirements for the editor, divided by priority. Thereby, we explain on which requirements from chapter 2 they are based and with which requirements they are associated.

Must Have
This subsection describes the editor's must-have requirements, sorted alphabetically.

FR 1: Contextual Drag and Drop
This requirement takes its content from tool requirement 12 of the same name. Additionally, the drag and drop is only needed inside the editor's element containment tree (see FR 4). This requirement is associated with the literature review's category Model Update (see ) and contributes to fulfilling its requirements Modify models and Editor for model modification.

FR 2: Contextual Element Creation
This requirement is equal to tool requirement 10 of the same name. It is associated with the literature review's category Model Creation (see ) and its requirement Add elements to model.

FR 3: Edit Element Properties
This requirement is equal to tool requirement 13 of the same name. It is associated with the literature review's category Model Update (see ) and contributes to fulfilling its requirements Modify models and Editor for model modification.

FR 4: Element Containment Tree
This requirement is based on tool requirement 1 of the same name. In distinction to its basis, we do not require capabilities for the expansion and collapsing of tree elements. Furthermore, while the tree must be capable of displaying labels and icons, their origin is not specified in this requirement but in FRs 14 and 15. This requirement is associated with the literature review's category Model Representation (see ) and its requirement Edit a DSL's abstract representations through a projectional editor. The latter is the case because a model's in-memory containment hierarchy is projected as a tree.
FR 5: Export Model as JSON-encoded Text
This requirement is based on tool requirement 21 Export Model as Text. Additionally,

we require that the exported text is encoded in JSON format. This requirement is associated with the literature review's category Import & Export (see ). It fulfills its requirement Provide default serialization for abstract representation and the store part of Load and store models in a human-readable textual notation.

FR 6: External Interface to Model
Similarly to tool requirement 34 Trigger External Operations on Model, the editor provides a public interface to the currently edited model. This allows reading and setting the model. This requirement is associated with the literature review's category API (see ).

FR 7: Load Model from JSON File
This requirement is based on tool requirement 26 Load Model from File. Additionally, we specify that the loaded data must be in JSON format. This requirement is associated with the literature review's category Import & Export (see ). It fulfills the load part of the requirement Load and store models in a human-readable textual notation. Furthermore, it is associated with the requirement Provide default serialization for abstract representation as it allows using the default serialization again.

FR 8: Property Validation
This requirement is equal to tool requirement 29 of the same name. It is associated with the literature review's category Validation (see ) and its requirements Support model validation and checking and Check for semantic errors.

FR 9: References Between Elements
This requirement is equal to tool requirement 19 of the same name. It is associated with the literature review's category References (see ) and fulfills its requirement References between model elements.

FR 10: Remove Elements from Model
This requirement is the same as tool requirement 17 of the same name. It is associated with the literature review's category Model Deletion and fulfills its requirement Remove elements from models.
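To make property validation (FR 8) concrete, the following TypeScript sketch checks a value against typical schema restrictions. The helper and the constraint shape are illustrative assumptions, not the framework's actual validation API:

```typescript
// Hedged sketch of property validation against schema constraints (FR 8).
// Constraint names mirror common JSON Schema keywords; the helper is invented.
type Constraint = { minLength?: number; minimum?: number; pattern?: string };

function validateProperty(value: unknown, c: Constraint): string[] {
  const errors: string[] = [];
  if (typeof value === "string") {
    if (c.minLength !== undefined && value.length < c.minLength)
      errors.push(`must be at least ${c.minLength} characters long`);
    if (c.pattern !== undefined && !new RegExp(c.pattern).test(value))
      errors.push(`must match pattern ${c.pattern}`);
  }
  if (typeof value === "number" && c.minimum !== undefined && value < c.minimum)
    errors.push(`must be greater than or equal to ${c.minimum}`);
  return errors; // an empty array means the property value is valid
}

console.log(validateProperty("", { minLength: 1 })); // one error: string too short
console.log(validateProperty(42, { minimum: 0 }));   // no errors
```

In the actual editor, such error messages would be attached to the visual representation of the validated property, as described in tool requirement 29.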
FR 11: Save Edited Model as JSON File
This requirement is based on tool requirement 27 Persist Edited Model as File. Additionally, we specify that the file's data is encoded as JSON. This requirement is associated

with the literature review's category Import & Export (see ). It fulfills its requirement Provide default serialization for abstract representation and the store part of Load and store models in a human-readable textual notation.

FR 12: Typed Property Editing
This requirement is equal to tool requirement 16 of the same name. It extends the capabilities described in FR 3. It is associated with the literature review's category Model Update (see ) and contributes to fulfilling its requirements Modify models, Editor for model modification, and Syntax-directed editing that ensures legal models. The latter is facilitated by limiting the input of property values to legal choices. For instance, rendering a boolean property as a checkbox naturally only allows selecting true or false as its value.

Desirable
This subsection describes the editor's desirable requirements, sorted alphabetically.

FR 13: Automatic Property Validation
This requirement is equal to tool requirement 28 of the same name. It is associated with the literature review's category Validation (see ) and its requirement Automatic model consistency check on model change.

FR 14: Definition of an Element Type's Label Property
The editor allows configuring for every data type which of its properties is used as its label. Thereby, this label is displayed in the element containment tree (FR 4) as part of the tree node representing an instance of the data type. Whenever the configured label property changes, the label is automatically updated (see FR 18). This requirement is associated with the literature review's category Model Representation (see ).

FR 15: Definition of Icons for Element Types
The editor allows configuring icons for data types. These icons are displayed in the element containment tree (FR 4) next to the tree node representing an instance of the data type.
This requirement is associated with the literature review's category Model Representation (see ) and its requirement Definition of graphical representation for language elements.

FR 16: Duplicate Elements
The editor allows duplicating an element and inserts the duplicate as a sibling of the original element. Thereby, the editor only allows the duplication if the insertion is legal.

This means the original element must fulfill one of the following conditions: it is part of a multi-containment property, or it is a root element and multiple root elements are allowed in the model. There is no directly equivalent tool requirement. However, it is related to tool requirement 32 Copy, Cut, and Paste because a duplication is essentially a copy followed by an immediate paste at the copy location. Therefore, this requirement is associated with the literature review's categories Model Creation (see ) and Utility (see ).

FR 17: Property Validation Shown in Tree
This requirement is the same as tool requirement 30 of the same name. It is associated with the literature review's category Validation (see ).

FR 18: Synchronize Element Containment Tree with Detail View
If relevant, the editor automatically synchronizes changes made to an element in its detail representation with the element containment tree. For instance, the detail view could allow adding an element to a containment property. Then the added element should automatically be represented in the tree. This requirement is associated with the literature review's category Model Representation (see ).

Nice To Have
This subsection describes the editor's nice-to-have requirements, sorted alphabetically.

FR 19: Copy and Paste Elements
This requirement is based on tool requirement 32 Copy, Cut, and Paste. The difference to its basis is that this requirement does not require the capability of cutting elements. It is associated with the literature review's category Utility (see ).

FR 20: Cut and Paste Elements
This requirement is based on tool requirement 32 Copy, Cut, and Paste. The difference to its basis is that this requirement does not require the capability of copying elements. It is associated with the literature review's category Utility (see ).

FR 21: Structural Instance Validation
This requirement is equal to tool requirement 31 of the same name.
It is associated with the literature review's category Validation (see ) and contributes to fulfilling its requirements Support model validation and checking and Check for semantic errors.

FR 22: Undo and Redo
This requirement is equal to tool requirement 33 of the same name. It is associated with the literature review's category Utility (see ) and is related to its requirement Undo and Redo for all API calls. While our requirement is not specific to API calls, both are about undoing and redoing changes to the model.

3.3. Implementation Constraints

In this section, we define the editor framework's implementation constraints. As all of them are mandatory, no priorities are assigned.

IC 1: Implementation in TypeScript
The implementation of the editor framework is done in TypeScript, a typed superset of JavaScript that compiles to plain JavaScript [60]. Modern web applications are usually implemented in JavaScript. However, JavaScript has the disadvantage of being a completely dynamically typed language. Consequently, static type checking is not possible in JavaScript. The problem with this is that many modern software development tools do not work properly without static type checking. For instance, static type checking allows code navigation to the types of variables, statement completion for variable and method names, and safe refactorings. Furthermore, without static type checking, type errors only become known at runtime [33]. TypeScript solves this by offering the optional declaration of types and intelligent type inference [33, 60]. The second advantage of using TypeScript over JavaScript lies in its transpilation process. This process transforms the typed TypeScript code to plain JavaScript code which can run anywhere JavaScript is executable, e.g. in web browsers or Node.js. This allows using features of new ECMAScript (ES) specifications during development (e.g. classes or maps) and transpiling the code to JavaScript compatible with older ES versions.
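As an illustration, a minimal tsconfig.json selecting such a compatibility target could look like the following. The concrete values are assumptions for the sketch, not the thesis project's actual configuration:

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "strict": true,
    "outDir": "dist"
  },
  "include": ["src/**/*.ts"]
}
```

With target set to es5, modern TypeScript features such as classes are transpiled to JavaScript that runs in older browsers; changing the value to es6 would emit the newer syntax directly.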
The advantage of this is that the feature gap between the current specification and the actual support in browsers and other tools can be mitigated [33]. Furthermore, the compatibility level of the generated JavaScript code can simply be specified in the TypeScript configuration. This approach is visualized in Figure 3.1. (Transpilation is defined as source-to-source compilation: source code of one language is compiled to source code of another language. ECMAScript is the standard defining the JavaScript language; it is managed by Ecma International.)

IC 2: JSON as Model Serialization Format
In conformity with the functional requirements FR 5, FR 7, and FR 11, models edited in the editor are serialized and deserialized in the JSON format. JSON is a widespread,

Figure 3.1.: TypeScript Transpilation Overview. Depending on the configuration, the TypeScript compiler compiles the TypeScript source code to JavaScript compatible with different versions of ECMAScript (ES).

platform-independent, and human-readable data interchange format. Additionally, it integrates smoothly with JavaScript and thereby TypeScript (see IC 1).

IC 3: JSON Schema to Define Data Models
JSON Schema is used to define the meta-models for our editor framework. JSON Schema is an internet draft that allows defining what a JSON document is supposed to look like. Concretely, it defines which elements may exist, which hierarchy they may have, and which properties they can contain. Furthermore, additional constraints on the data can be defined: for instance, mandatory properties, the minimum value of a number property, or a regex pattern that a string property must satisfy [23]. Thereby, JSON Schemata are themselves defined in the JSON format. Furthermore, JSON Schema is its own meta-model and consequently defined in terms of itself. This offers a crucial conceptual advantage. Because the meta-model defining JSON Schema is a JSON Schema itself and JSON Schemata are defined in JSON, the editor framework can create an editor based on the meta-model that allows creating new JSON Schemata. These schemata can then be used with the same framework to create an editor for the just-defined data. In conclusion, using JSON Schema to define the editor's meta-models allows us to define new meta-models which can then again be used to generate a new editor.
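To illustrate the constraint capabilities described above, the following assumed example schema (not taken from the thesis) requires a name, restricts age to non-negative integers, and constrains email with a regex pattern:

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "name": { "type": "string", "minLength": 1 },
    "age": { "type": "integer", "minimum": 0 },
    "email": { "type": "string", "pattern": "^[^@]+@[^@]+$" }
  },
  "required": ["name"]
}
```

A generated editor for this schema would render the three properties with typed controls and reject instances that omit the mandatory name property or violate the constraints.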

3.4. Quality Attributes

In this section, we describe the quality attributes that we want to realize in the editor framework's architecture, design, and implementation.

QA 1: Configurability
The editor framework should provide extensive options to configure generated editors. The reason for this is that the framework allows editor generation for all kinds of data, and different meta-models have different requirements for the display and editing of their instances. Possible configuration options include labels (see FR 14), icons (see FR 15), and the rendering of element properties. Related to the latter, desirable configurations include, for instance, which properties are shown or whether they are read-only.

QA 2: Extensibility
The editor framework should be easily extensible with new functionality. This especially concerns two aspects. Internal extensibility: it should be easy to add or change functionality related to the element containment tree (FR 4) or to editing the elements' properties. External extensibility: it should be easy to add functionality related to the editor framework's and the generated editors' external interfaces. This includes extending configuration options as well as the loading and saving of models from, respectively to, different sources (e.g. loading a model from a URL).

QA 3: Integratability
The editor framework, as well as generated editors, should be easily integratable in other applications using compatible technologies. In conformity with QA 1, this especially requires that the framework provides simple access to the configuration options in an integration scenario.

4. Implementation

In this chapter, we describe the architecture, design, and implementation of the editor framework. We start with the architecture in section 4.1, where we introduce the different components of the editor framework. In the following sections, we describe the design and implementation of the framework's components (sections 4.2 to 4.8). We justify our design decisions with the requirements established in chapter 3. In section 4.9, we give a detailed explanation of the reference mechanism which allows referencing data inside and outside the data of a generated editor. Finally, we specify the customization options for an editor and illustrate their usage in section

4.1. Architecture

In this section, we describe the conceptual architecture of the editor framework. Thereby, we first give an overview of all components and then describe every component in a separate section. Figure 4.1 shows the architecture as a UML component diagram. The entry point of the framework is the EditorRenderer component. It provides three interfaces to configure the rendered editor. The schema interface allows setting and reading the JSON Schema defining the generated editor's data. The data interface allows setting and reading the data currently displayed in the editor. The configuration interface allows customizing additional properties of the editor, such as icons and labels shown in the containment tree. All customization options are described in detail in section

These interfaces can be accessed by Services to extend the editor's behavior with additional functionality. Thereby, the Services component in the diagram represents an arbitrary number of services which can each use one or more of the provided interfaces. Because these interfaces are publicly available, a service can be added without any need to edit the editor framework's source code.
Consequently, this greatly contributes to satisfying the extensibility quality attribute (see QA 2). In order to render the editor, the EditorRenderer requires two interfaces. The first one is rawSchema, which is used to provide the JSON Schema defining the data editable in the editor. The second one is renderContainmentTree, which is used to render the element containment tree (see FR 4) of the data currently edited in the editor. Both interfaces are provided by subcomponents of the JsonForms2 component. The Parser takes a plain JSON Schema, analyzes it for defined types and properties, organizes the gained information in a practical data structure, and makes the results available to other components via its analyzedSchema interface. The TreeRenderer offers the renderContainmentTree interface. It renders an element containment tree for data defined

Figure 4.1.: UML Component Diagram: The Editor Framework's Conceptual Architecture

by a schema. To achieve this, it consumes the analyzedSchema interface to get the necessary information to render the tree. In order to render the properties of an element selected in the tree, the TreeRenderer consumes the renderObjectProperties interface of the DetailRendering component. To validate the values of a rendered element's properties, it uses the validateProperties interface of the Validation component. This component uses constraint information from the JSON Schema to determine which properties contain legal values.

4.2. Editor Renderer

The EditorRenderer component is the entry point for creating an editor for a JSON Schema. It is implemented by one class with the name JsonEditor. This class creates and configures a JsonFormsElement (see subsection 4.4.1) to render the element containment tree according to the configured JSON Schema. The JsonEditor class provides all methods needed to configure and customize an editor:

configure: This method allows configuring all possible customizations with a single configuration object. For more details see subsection

data: This property allows reading and setting the data currently visualized in the editor. Setting it triggers a re-rendering of the containment tree.

schema: This property allows reading and setting the JSON Schema describing the editor's data. Setting it triggers a re-rendering of the containment tree.

setImageMapping: This method allows setting a mapping that configures an icon that is shown for all data elements of a type in the containment tree. For more details see subsection

setLabelMapping: This method allows setting a mapping that configures which property of a data element is used as its label in the containment tree. For more details see subsection

setModelMapping: This method allows setting a mapping that enables inferring a data element's type from one of its properties.
For more details see subsection

registerDetailSchema: This method allows registering a UI Schema (see subsection ) for an element type. This UI Schema defines the rendered form of applicable data elements when they are selected in the containment tree.

registerResource: This method allows registering a resource to the editor's resource set. Registered resources can be used to reference data outside the editor's data (see section 4.9).

The JsonEditor manages the correct configuration of these settings in JSONForms 2. Consequently, a user of the editor framework does not need to worry about how to

62 4. Implementation do these configurations himself. This leads to an increased usability of the system. Furthermore, providing these customizations contributes to fulfilling the configurability quality attribute (QA 1). Custom Element The JsonEditor is implemented as a HTML Custom Element 34. As the name suggests, this technology allows to define new HTML elements identified by a unique name. This is done by extending the class HTMLElement of JavaScript s DOM API and registering the class with the element s name as a custom element. This has multiple advantages. A custom element is usable like a normal HTML element. It can access the whole DOM API and you can register any type of event listener (e.g. a click listener). Furthermore, the custom element can react to lifecycle events, like being added or removed to respectively from a document. Also, a custom element can be created by simply creating a HTML tag with the specified name [6]. In our case, this allows to insert a JsonEditor in a web application by simply adding a <json-editor> tag at the desired position. Consequently, the editor framework can be embedded in other web applications with little effort. This greatly increases the integratability of our framework and thereby allows us to fulfill quality attribute QA Services The Services component stands for an arbitrary number of services that extend the editor framework s functionality. To achieve this, services can use all provided methods of the editor framework introduced in section 4.2. This allows for flexible addition and removal of functionality from generated editors, even during their runtime. We selected this approach in order to increase the extensibility of our system as required by quality attribute QA 2. We provide the following three services ourselves which can optionally be used when using a generated editor. Export Data Dialog This service can be instantiated for any HTMLElement. 
When the element is clicked, the service opens a dialog that displays a multi-line text area. This text area contains the editor's current data as JSON. Furthermore, the dialog contains a button that copies the serialized data to the user's clipboard. This service satisfies functional requirement FR 5.

Load Data File This service can be instantiated for any HTMLElement. When the element is clicked, the service opens a native file-open dialog. Then, the service reads in the selected file and validates whether it contains valid JSON. If so, the service validates the JSON against the editor's JSON Schema. If the validation is successful, the loaded data is set as the editor's data. This satisfies functional requirement FR

Save Data File This service can be instantiated for any HTMLElement. When the element is clicked, the service opens a native file-save dialog that allows saving the editor's current data as a JSON file. This satisfies functional requirement FR

4.4. JsonForms 2

In this section, we describe the JsonForms2 component. It is implemented by the open-source framework JSONForms 2, which is licensed under the MIT license. In order not to re-invent the wheel and write a property and object rendering framework ourselves, we decided to base the editor generation on JSONForms 2. As already indicated in section 4.1, we use the framework for rendering the element containment tree (see FR 4). Furthermore, JSONForms 2 allows rendering the properties of an element. As JSONForms 2 did not provide all the functionality we need to fulfill our requirements, we extended it and contributed the added functionality to the framework. Consequently, our developed functionality is also available as open-source software.

To use JSONForms 2 for our framework, we had to make sure that it matches our implementation constraints. JSONForms 2 uses JSON Schema to define the properties of rendered elements as well as validation constraints on these properties. Consequently, it is compatible with IC 3, which requires JSON Schema to be used as the data model definition format. As a consequence, JSONForms 2 is well suited to edit data in the JSON format. While it does not offer functionality to import or export JSON per se, it makes the currently edited data available as a JavaScript object based on the defining JSON Schema. This allows us to simply read the edited data and serialize it to JSON in the EditorRenderer component. Analogously, we can set the edited data after de-serializing it from JSON. As a consequence, we are able to adhere to implementation constraint IC 2, which requires using JSON as the editor's data input and output format. Finally, JSONForms 2 is implemented in TypeScript.
This allows us to contribute our extensions in TypeScript and to access the framework's methods in a typed way when using it in the EditorRenderer. Therefore, we can fully adhere to implementation constraint IC 1, which mandates implementing the editor framework in TypeScript.

JSONForms 2 provides two main entry points for providing additional functionality without editing its source code. First, services can be registered with the framework that interact with its current state. Second, additional renderers can be added that allow rendering properties of certain types in a custom way. For instance, one could provide an email address renderer that allows sending an email by clicking on the address. These mechanisms match well with quality attribute QA 2 Extensibility. Furthermore, JSONForms 2 allows registering UI Schemata which define how a data object is rendered. Making this mechanism available in our EditorRenderer contributes to satisfying quality attribute QA 1 Configurability.
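To make the UI Schema mechanism more concrete, the following sketch shows the general shape of such a schema as it would be registered with the framework. The exact keyword set depends on the JSONForms 2 version in use, so the property names here are illustrative rather than normative.

```typescript
// Illustrative UI Schema for a data type with "name" and "done" properties.
// A layout element nests controls whose scope points into the JSON Schema
// that defines the data; a rule (as applied by the RuleService) could be
// attached to any of these elements.
const uiSchema = {
  type: 'VerticalLayout',
  elements: [
    { type: 'Control', label: 'Name', scope: { $ref: '#/properties/name' } },
    { type: 'Control', label: 'Done', scope: { $ref: '#/properties/done' } },
  ],
};

// The registry would pick a registered UI Schema matching the data schema;
// here we only collect the referenced property paths.
const controlScopes = uiSchema.elements.map(e => e.scope.$ref);
```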

4.4.1. Design

In this subsection, we explain the design of JSONForms 2. It is shown as a UML class diagram in Figure 4.2. The entry point for rendering a data object with JSONForms 2 is the JsonFormsElement class. It contains the dataObject to render and the dataSchema defining it. The JsonFormsElement has an arbitrary number of JsonFormsServices. These provide various functionality that is not related to the rendering process itself but interacts with the rendered form. Currently, JSONForms 2 implements two of these services. The ValidationService validates the rendered dataObject's properties against the dataSchema and, if possible, shows the validation status on the rendered controls of the affected properties. The RuleService evaluates and applies rules defined for rendered controls. Rules are defined in the UI Schema and may be attached to any UI Schema element. For instance, a rule could define whether a control is shown or hidden based on the value of a specified property.

To access the services needed for the actual rendering of its dataObject, the JsonFormsElement uses the JsonForms class. It contains static references to these services. The UISchemaRegistry knows all UI Schemata registered with the JSONForms 2 instance. It provides functionality to get the best-fitting UI Schema for a given data schema and data object. Furthermore, if no suitable UI Schema is registered, it can generate a default one. The SchemaService fulfills the role of the schema parser. It processes a JSON Schema and makes the results available (see the detailed description in section 4.7). The RendererService knows all renderers registered with JSONForms 2. Accordingly, it offers a public method to get the most applicable Renderer for a given UI Schema element describing how to render the data defined by the given JSON Schema.
Additionally, the JsonForms class provides access to the resources registered with the system via a ResourceSet. This is used for referencing external data (see section 4.9).

The abstract Renderer is the base class for all renderer implementations. There are mainly two types of renderers in JSONForms 2. First, LayoutRenderers do not directly display any property but contain further child elements which are rendered according to the layout implemented by the renderer. For instance, the simplest of the layouts is the vertical layout renderer: it renders all contained properties beneath one another. Another example is the group layout, which behaves like the vertical layout but additionally displays a label and a border around the contained elements. The second major type of renderers are PropertyRenderers. A PropertyRenderer usually creates a control with a label for a property of a certain type. The rendered control allows editing the property's value in conformity with its type. For instance, JSONForms 2 provides a renderer for boolean values that displays them as a checkbox. Finally, a special Renderer is the TreeMasterDetailRenderer. It renders a hierarchical containment tree of the data objects contained in the given root data object. It uses the JsonFormsElement to render the properties of the currently selected element. Furthermore, the SchemaService is used to get the needed information about the root data's structure to render the tree. A more detailed description is given in section 4.5.
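The two renderer families can be illustrated with a strongly simplified sketch. The class names mirror the diagram, but the real JSONForms 2 renderers extend HTML elements and carry considerably more state, so this is a conceptual outline only.

```typescript
// Conceptual sketch of the renderer hierarchy, not the actual classes.
abstract class Renderer {
  abstract render(): string;
}

// A layout renderer displays no property itself; it renders its children
// according to a layout, here simply beneath one another.
class VerticalLayoutRenderer extends Renderer {
  constructor(private readonly children: Renderer[]) { super(); }
  render(): string {
    return this.children.map(child => `<div>${child.render()}</div>`).join('');
  }
}

// A property renderer creates a control for a single typed value,
// e.g. a checkbox for a boolean property.
class CheckboxRenderer extends Renderer {
  constructor(private readonly label: string, private readonly value: boolean) { super(); }
  render(): string {
    return `<label>${this.label}<input type="checkbox"${this.value ? ' checked' : ''}/></label>`;
  }
}

const form = new VerticalLayoutRenderer([new CheckboxRenderer('Done', true)]).render();
```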

Figure 4.2.: UML Class Diagram: JSONForms 2 Design

4.4.2. Rendering Process

In this subsection, we describe in more detail how JSONForms 2 renders a data object defined by a schema. This is visualized in Figure 4.3. The rendering process is initiated by a User wanting to render a data object whose structure is defined by a JSON Schema. Thereby, the User can be any system that can execute JavaScript. As a first step, it creates a new JsonFormsElement and sets its schema and data object in steps two and three. The latter triggers the rendering process by calling the JsonFormsElement's render method in step four.

In order to render the data object, the JsonFormsElement needs a UI Schema describing the form to render. To get the UI Schema best suited to the JSON Schema defining the data, the JsonFormsElement calls the findMostApplicableUISchema method on the UISchemaRegistry in step five. The UISchemaRegistry then matches the most applicable schema. If none can be found, a default UI Schema is generated and returned in step six. Next, the JsonFormsElement needs the appropriate renderer for the UI Schema. To find it, the JsonFormsElement calls the RendererService's findMostApplicableRenderer method in step seven. It is called with the UI Schema and the JSON Schema and data object configured in steps two and three. The RendererService determines the best-suited renderer. In step eight, it creates a new instance of the renderer and configures it with the parameters received in step seven. In this example case, the created renderer is a VerticalRenderer, which renders a vertical layout. After the VerticalRenderer has been returned to the JsonFormsElement, the renderer's render method is called in step nine. After the rendering of the VerticalRenderer is finished, the rendering process is complete and the form is shown.

The detailed rendering loop of the VerticalRenderer is visualized in Figure 4.4. In essence, this rendering loop is the basis of every layout renderer.
To render all elements defined in the UI Schema rendered by the VerticalRenderer, it loops over all sub-elements of the UI Schema. These sub-elements are UI Schemata themselves; consequently, the rendering process is recursive. In step one, the VerticalRenderer wants to get the most applicable renderer for the current UI Schema element. Therefore, it calls the RendererService's findMostApplicableRenderer method. Like before, the RendererService determines the best renderer and creates a new instance in step two. Subsequently, it configures the created Renderer instance with the current UI Schema element, the data object, and the JSON Schema in step three. Afterwards, the renderer is returned to the VerticalRenderer, which then calls its render method in step four. Finally, the loop restarts with the next UI Schema element as long as one is present.
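The recursive lookup-and-delegate loop described above can be sketched as follows. The function findMostApplicableRenderer stands in for the RendererService and is reduced to a switch over the UI Schema element type; the real service selects among all registered renderers.

```typescript
// Minimal model of the recursive rendering loop (illustrative names).
interface UISchemaElement {
  type: string;
  elements?: UISchemaElement[];
  scope?: string;
}

// Stand-in for the RendererService lookup: returns a render function
// for the given UI Schema element and data object.
function findMostApplicableRenderer(ui: UISchemaElement, data: Record<string, unknown>): () => string {
  switch (ui.type) {
    case 'VerticalLayout':
      // Recursive case: delegate every sub-element to its own renderer.
      return () => (ui.elements ?? [])
        .map(child => findMostApplicableRenderer(child, data)())
        .join('\n');
    case 'Control': {
      const property = ui.scope!.split('/').pop()!;
      return () => `${property} = ${String(data[property])}`;
    }
    default:
      return () => '';
  }
}

const rendered = findMostApplicableRenderer(
  { type: 'VerticalLayout', elements: [{ type: 'Control', scope: '#/properties/name' }] },
  { name: 'demo' },
)();
```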

Figure 4.3.: UML Sequence Diagram: JSONForms 2 Rendering Process

Figure 4.4.: UML Sequence Diagram: Rendering Loop

4.5. Tree Renderer

The TreeRenderer component is responsible for rendering the element containment tree of the data elements present in the editor's data. It is implemented by the TreeMasterDetailRenderer class and an additional module which provides drag and drop functionality (see subsection 4.5.1). The tree is rendered by expanding the editor's data recursively, executing the following steps starting at the root data object.

1. Create a new list element for the current data object. Set its icon and label based on the editor's configured image and label mappings.

2. Retrieve all ContainmentProperties (see 4.7.1) from the SchemaService based on the schema describing the current data object.

3. Create the corresponding child lists based on these properties and configure an add button which allows adding new data objects. The creatable elements are defined by the current object's containment properties.

4. Create a delete button that allows removing the current list item and its associated data object from the containment tree and the containing data object, respectively. When the current element is removed, all its child elements are deleted, too.

5. Render the current data object's child objects in their corresponding lists. This is done by recursively executing this algorithm for every one of the child objects with their corresponding parent list as a parameter.

6. Add the created list element to the given parent list.

As indicated above, the add button of a tree element only allows creating new child elements which are specified by the tree element's containment properties. Consequently, the editor only allows the creation of elements which are legal children of the parent element. This fulfills functional requirement FR 2. The delete button fulfills functional requirement FR 10.
When a tree element is selected by the user, the TreeMasterDetailRenderer creates a new JsonFormsElement to create a detail view with the element's properties.

The tree was implemented in plain HTML5 without additional frameworks for binding the tree elements to the data objects or for creating the tree itself. This was done to keep the tree independent of the development of other frameworks, because the tree is intended as a component with a long life cycle. Therefore, the dependency on the development of foreign frameworks should be as low as possible. Furthermore, this allows extending the tree's functionality without being limited by the APIs of used frameworks. This concurs with our extensibility quality attribute QA 2.
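The six expansion steps can be condensed into the following sketch. The schema lookup and the configured label mapping are abstracted into two callbacks (labelOf and containmentKeysOf), which are assumptions of this sketch rather than the actual API.

```typescript
// Condensed sketch of the recursive containment-tree expansion.
interface TreeNode {
  label: string;
  children: TreeNode[];
}

function expandTree(
  data: Record<string, any>,
  labelOf: (obj: Record<string, any>) => string,              // stand-in for the label mapping
  containmentKeysOf: (obj: Record<string, any>) => string[],  // stand-in for the ContainmentProperty lookup
): TreeNode {
  // Step 1: create the list element for the current data object.
  const node: TreeNode = { label: labelOf(data), children: [] };
  // Steps 2/3: determine the containments of the current object.
  for (const key of containmentKeysOf(data)) {
    // Step 5: recurse into every child object of every containment.
    for (const child of data[key] ?? []) {
      node.children.push(expandTree(child, labelOf, containmentKeysOf));
    }
  }
  // Step 6: the caller attaches the finished node to its parent list.
  return node;
}

const tree = expandTree(
  { name: 'root', tasks: [{ name: 'a', tasks: [] }, { name: 'b', tasks: [{ name: 'c', tasks: [] }] }] },
  obj => String(obj.name),
  () => ['tasks'],
);
```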

4.5.1. Drag and Drop

The goal of implementing drag and drop in the tree is to allow moving elements of the containment tree to new valid locations. This includes three aspects. First, the order of elements inside one containment list can be rearranged. Second, a tree element can be moved to a new parent. Thereby, it must be guaranteed that the new containment can contain the moved element. Third, when a tree element is moved successfully, the editor's data must be adapted to match the tree's structure. Implementing this behavior allows us to fulfill functional requirement FR 1.

The main challenge in implementing the aforementioned functionality is to determine whether a dragged tree element can be dropped at the current target location. As JavaScript is a dynamically typed language, the type of a tree element's data object cannot easily be inferred at runtime. However, a tree node needs to know the type of its data to determine where it can be dropped in the containment tree. To achieve this, we keep a map in the TreeRenderer that maps from a tree element to an information object which contains the tree element's represented data, the schema describing this data, and a function to delete the data from the model. Keeping this mapping for every element contained in the tree allows us to access a dragged element's data and type. The second part of identifying whether a containment list is a valid target is to determine whether the list is compatible with the type of the tree element's data. To achieve this, we annotate every list with the schema IDs which identify the types of possible contained data objects. When a tree element is dropped on a containment list, the element's schema is retrieved from the map and matched against the list's allowed IDs. If the drop is allowed, the element is added to the list and the corresponding data object is moved. To not re-invent the wheel, we use the open-source framework Sortable.
The framework is actively maintained, is built on HTML5's native drag and drop API, and provides many customization options, including registering custom handlers for various drag and drop events. Furthermore, the framework integrates well with our implementation of the containment tree. The framework works list-based: drag and drop is configured for a list, which activates drag and drop for all of its children. Furthermore, the framework allows exchanging elements between multiple lists that were configured with the framework. Consequently, we can provide drag and drop for the whole tree by configuring it for every containment list in the tree. Additionally, we configure custom handlers which react to the dragging and dropping of tree elements. These handlers validate that a tree element can only be dropped at a legal target, using the mechanism described in the previous paragraph. When a legal drop occurs, the framework handles moving the tree element to the new list. The custom handlers then add the tree element's associated data to the containment represented by the new list and use the delete function to remove the data from its old parent. In conclusion, Sortable provides the drag and drop functionality itself, while we ensure that elements can only be dropped at legal locations and move the underlying model data.
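The legality check that the custom handlers perform can be reduced to the following predicate over the two bookkeeping structures described above (the element-to-schema map and the per-list annotation of allowed schema IDs). The names are illustrative; in the editor this predicate would be evaluated inside the drag-and-drop callbacks before a move is accepted.

```typescript
// Bookkeeping structures: which schema a dragged element instantiates,
// and which schema ids each containment list accepts.
const elementSchemaId = new Map<string, string>();
const listAllowedIds = new Map<string, string[]>();

// A drop is legal iff the dragged element's schema id is among the
// schema ids the target list is annotated with.
function canDrop(elementId: string, listId: string): boolean {
  const schemaId = elementSchemaId.get(elementId);
  const allowed = listAllowedIds.get(listId) ?? [];
  return schemaId !== undefined && allowed.includes(schemaId);
}

// Example setup: a "task" element may be dropped into a task list but
// not into a list that only accepts "user" elements.
elementSchemaId.set('tree-item-1', '#task');
listAllowedIds.set('task-list', ['#task']);
listAllowedIds.set('user-list', ['#user']);
```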

4.6. Detail Rendering

The DetailRendering component is responsible for rendering data elements which are selected in the element containment tree. The component is implemented by recursively using the JsonFormsElement class (see Figure 4.2). Thereby, the JsonFormsElement is configured with the data element currently selected in the tree and the JSON Schema describing the element's type. When rendering a data element, the JsonFormsElement uses a registered UI Schema fitting the schema defining the data. Therefore, the rendered properties view can be configured. As already indicated in subsection 4.4.1, JSONForms 2 provides various renderers to render properties according to their type. Examples are a boolean renderer that displays boolean values as a checkbox, an enum renderer which provides a combo box with the possible values, and a text renderer which shows a text field. Consequently, the implementation of the DetailRendering component allows editing properties in controls suitable to their types. As a result, it satisfies the functional requirements FR 3 and FR 12.

Limitations

While there is no conceptual limitation in JSONForms 2 regarding which features of JSON Schema can be rendered, not every case is covered by a sophisticated renderer. There is a basic renderer that can render any property in a simple text field if a Control specifies the property in the UI Schema. However, this does not allow proper editing of non-string properties. Therefore, we do not count this as proper rendering. The following JSON Schema features are not properly rendered:

- Any property definition using the keywords anyOf, allOf, not, or oneOf
- Properties whose type is provided as an array instead of a single type
- Enums whose enumerated values are of multiple different types
- Enums with non-primitive values
- Arrays whose items are tuples
- Arrays of primitive values

4.7. Parser

The Parser component is responsible for providing functionality related to analyzing JSON Schemata. The Parser must provide the following functionality:

- Get the contained data objects of the analyzed type
- Get the references of the analyzed type
- Get self-contained sub-schemata of the types defined in the editor's root schema
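As a running example for these three kinds of results, consider the following hypothetical root schema. A parser would report the subTasks property as a containment, the entry in the links block as a reference, and #/definitions/task as a sub-schema type. The exact shape of the links block is only indicated here; it is discussed in section 4.9.

```typescript
// Hypothetical root schema used for illustration only.
const rootSchema = {
  definitions: {
    task: {
      id: '#task',
      type: 'object',
      properties: {
        name: { type: 'string' },
        // Containment: an array of non-primitive objects of the same type.
        subTasks: { type: 'array', items: { $ref: '#/definitions/task' } },
      },
      // Reference: shape indicated only, see section 4.9 for the real format.
      links: [{ rel: 'assignee', href: '...' }],
    },
  },
  $ref: '#/definitions/task',
};
```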

Figure 4.5.: UML Class Diagram: Schema Service and Properties

This functionality must be easily accessible for all data types defined in the editor's root schema. Additionally, each piece of functionality should be accessible independently of the others and return easily usable results that do not require traversing big data structures. Furthermore, the Parser component should provide good testability and extensibility. The latter is important because additional information might need to be extracted in the future. Based on these design decisions, we implement the Parser component as a lightweight SchemaService which is globally accessible. The class diagram in Figure 4.5 shows the design of the SchemaService and its associated classes. We explain them and their offered functionality in the following paragraphs.

SchemaService. The SchemaService is the central class providing the functionality of the Parser component. It is initialized with the editor's root data schema. This schema is needed to create self-contained schemata by resolving local references in sub-schemata against the root schema. The detailed algorithm is described in subsection 4.7.2. Schemata that have been made self-contained are cached in selfContainedSchemas. Thereby, the map's keys are schema IDs and its values the schemata.

The SchemaService's three property-related methods are used to analyze a given schema. Thereby, only the needed information is extracted. This allows requesting the properties of single data types defined in the root schema on demand. The result is tailored to the given data type instead of the editor's complete schema. This provides simple results without a need to traverse them for the required information. Furthermore, the clear separation of the different properties allows testing each function separately. This vastly improves the SchemaService's testability compared to an approach that analyzes the whole schema at once and returns one data structure with all properties. Another advantage of the modular approach is increased extensibility: if the need for extracting further properties arises, we can simply add another method to the SchemaService without needing to change existing code.

The getContainmentProperties method returns an array of all ContainmentProperties directly contained in the type defined by the given schema. Correspondingly, hasContainmentProperties returns whether the type has any ContainmentProperties. A detailed explanation of how these properties are collected is given in subsection 4.7.1. The getReferenceProperties method returns an array of all ReferenceProperties directly contained in the type defined by the given schema. To this end, the given schema's links block (see 4.9.1) is analyzed. If it is present, a ReferenceProperty is created for every entry. If an entry fulfills the requirements for ID-based referencing (see 4.9.3), the created ReferenceProperty uses this reference technique. Otherwise, path-based referencing is used (see 4.9.4).

Property. Property defines the basic attributes of a data type's property. The label is a name describing the Property. The property attribute defines the key that contains the Property's content in an instance. The schema is the JSON Schema defining the data which the Property may contain. A property described by an instance of Property or one of its subclasses is independent of an actual data object containing the property. Therefore, to execute methods based on a Property, the affected data object has to be provided.

ContainmentProperty.
A ContainmentProperty describes a property that contains an array of non-primitive data objects. Concretely, this means that a schema describing one data object contained in a ContainmentProperty must be of type object. The schema property of an instantiated ContainmentProperty contains the schema describing one data object contained in the containment. For all three methods defined by the ContainmentProperty, the data parameter is the data object which contains the containment defined by the ContainmentProperty. The addToData method adds the given valueToAdd to the containment. The deleteFromData method removes the given valueToDelete from the containment. The getData method returns all data contained in the containment.

ReferenceProperty. A ReferenceProperty describes a property whose value references another data object. Thereby, the data object is not contained in the property; the property only stores a value that allows resolving this data object. The implemented reference mechanism and its two implementations are described in section 4.9. Similarly to the ContainmentProperty, the data parameter is the data object which contains the ReferenceProperty. The getData method returns a map with the reference values as keys and the resolved data objects as values. The addToData method adds valueToAdd to the property. In the case of path-based referencing (see 4.9.4), this must be a path. For ID-based referencing (see 4.9.3), it is the referenced data object itself. Whether the property uses ID- or path-based referencing is returned by isIdBased. The findReferenceTargets method returns a map which contains all data objects that can be referenced by this ReferenceProperty. Thereby, the map's keys are the reference values (IDs or paths) and its values the actual data objects.

4.7.1. Retrieve Containment Properties

To retrieve the ContainmentProperties of a given schema, a recursive algorithm is used. The pseudocode describing this algorithm is shown in Listing 4.1. First, we describe the parameters of the getContainment method:

key The key that contains the contents of the ContainmentProperty. This corresponds to the equally named attribute of the Property class.

name The name of the ContainmentProperty.

schema The schema currently analyzed for ContainmentProperties. Because the algorithm works recursively, a differentiation between the current schema and the algorithm's root schema is needed.

rootSchema The root schema whose ContainmentProperties are retrieved. This is the same schema that a client provides to the public getContainmentProperties method. Note: this is not the same as the rootSchema attribute of the SchemaService.

isInContainment A boolean value stating whether the current schema describes the type of a containment.

addFunction A function that allows adding a data object to a containment described by the created ContainmentProperty.

deleteFunction A function that allows removing a data object from a containment described by the created ContainmentProperty.

getFunction A function that allows getting the contents of a containment described by the created ContainmentProperty.
internal A boolean value stating whether the current function call analyzes a schema inside the properties definition of the analyzed root schema. If this is the case, the properties of the current schema are not resolved because we only want ContainmentProperties directly contained in the analyzed root schema. Furthermore, this check is necessary to avoid running into an endless loop in case cascaded properties definitions reference the root schema.

 1 ContainmentProperty[] getContainmentProperties(schema) {
 2   return getContainment('root', 'root', schema, schema, false, null, null, null, false);
 3 }
 4
 5 ContainmentProperty[] getContainment(key, name, schema, rootSchema, isInContainment, addFunction, deleteFunction, getFunction, internal) {
 6   if schema.$ref exists
 7     resolvedSchema = getSelfContainedSchema(rootSchema, schema.$ref)
 8     name = last segment of schema.$ref
 9     return getContainment(key, name, resolvedSchema, rootSchema, isInContainment, addFunction, deleteFunction, getFunction, internal)
10
11   if schema is of type object
12     if isInContainment is true
13       property = create new ContainmentProperty configured with key, name, and the given functions
14       return [property]
15     if internal is true
16       return []
17
18     result = []
19     for propKey in schema.properties.keys
20       childSchema = schema.properties[propKey]
21       properties = getContainment(propKey, propKey, childSchema, rootSchema, false, addFunction, deleteFunction, getFunction, true)
22       append properties to result
23     return result
24
25   if schema is of type array and schema.items is not of type array
26     add = getAddToArrayFunction(key)
27     delete = getDeleteFromArrayFunction(key)
28     get = getArray(key)
29     return getContainment(key, name, schema.items, rootSchema, true, add, delete, get, internal)
30
31   if schema.anyOf exists
32     result = []
33     for childSchema in schema.anyOf
34       prop = getContainment(key, undefined, childSchema, rootSchema, isInContainment, addFunction, deleteFunction, getFunction, internal)
35       append prop to result
36     return result
37
38   return []
39 }

Listing 4.1: Algorithm to Collect Containment Properties

We chose the approach of providing the functions to a created ContainmentProperty in order to achieve increased extensibility (see QA 2). With this approach, when adding an additional schema structure that results in a ContainmentProperty, we only need to add another if-case to the recursive function and configure the needed functions. Consequently, we do not need to change the ContainmentProperty's implementation. In the following paragraphs, we explain how the algorithm works by describing how it handles the different types of schemata.

If the current schema is a reference, the reference is resolved and made self-contained by using the SchemaService's getSelfContainedSchema method (line 7). The potential containment property name is set to the last segment of the reference (line 8). Finally, the ContainmentProperties for the resolved schema are retrieved (line 9).

If the schema describes an object, it might describe the content of a ContainmentProperty (line 11). If the current schema is part of a containment, a new ContainmentProperty is created and configured with the algorithm's parameters (lines 12 to 13) and returned encapsulated in an array (line 14). If internal is true but we are not in a containment, we are in a loop; an empty array is returned to avoid endless recursion (lines 15 to 16). Otherwise, a new empty result array is created and all properties of the schema are iterated. For every property, the schema describing its content is retrieved (line 20) and its ContainmentProperties are collected recursively with internal set to true (line 21). The returned properties are appended to the result array (line 22), which is finally returned (line 23).

If the schema describes an array and the array's contents are not tuples, the schema might describe a containment (line 25). Therefore, the functions to add, delete, and get data from an array are configured (lines 26 to 28). The ContainmentProperties are recursively resolved for the schema describing the array's items.
Thereby, the configured functions are passed as parameters and isInContainment is set to true (line 29).

If the schema contains the anyOf keyword, the possible containment can contain any of its child schemata (line 31). Therefore, the ContainmentProperties are recursively retrieved for each child schema on its own (lines 33 to 34). The results are merged and returned (lines 32, 35, and 36).

If none of the above cases match, the schema neither describes nor could contain any ContainmentProperties. For instance, this is the case if the schema defines a primitive type (e.g. number or string). In this case, an empty array is returned (line 38).

Limitations

The algorithm currently cannot handle the JSON Schema keywords oneOf, allOf, or not. Furthermore, references whose root is not the given rootSchema cannot be resolved. However, non-circular remote references should already be resolved by the time their reference properties are requested (see 4.7.3). Another limitation is that containment arrays whose item type is a tuple are not supported.
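A strongly reduced, executable version of Listing 4.1 is shown below. It only covers the object, array, and local $ref cases, returns just the keys of containment properties instead of full ContainmentProperty objects, and omits the add/delete/get functions. It is meant to make the recursion tangible, not to reproduce the implementation.

```typescript
type Schema = { [key: string]: any };

// Resolve a local reference such as '#/definitions/task' against the root schema.
function resolveRef(rootSchema: Schema, ref: string): Schema {
  return ref.split('/').slice(1).reduce((s: Schema, segment) => s[segment], rootSchema);
}

// Reduced variant: return the keys of all properties that are containments,
// i.e. arrays of non-primitive objects (cf. lines 11 to 29 of Listing 4.1).
function getContainmentKeys(schema: Schema, rootSchema: Schema): string[] {
  const result: string[] = [];
  for (const [key, prop] of Object.entries(schema.properties ?? {}) as [string, Schema][]) {
    // Tuples (items given as an array) are not supported, as in the original.
    if (prop.type !== 'array' || Array.isArray(prop.items)) continue;
    let items: Schema = prop.items ?? {};
    if (typeof items.$ref === 'string') items = resolveRef(rootSchema, items.$ref);
    if (items.type === 'object') result.push(key);
  }
  return result;
}

const demoSchema: Schema = {
  type: 'object',
  properties: {
    name: { type: 'string' },
    tags: { type: 'array', items: { type: 'string' } },        // primitive array: no containment
    subTasks: { type: 'array', items: { $ref: '#/definitions/task' } }, // containment
  },
};
const demoRoot: Schema = { definitions: { task: demoSchema } };
const keys = getContainmentKeys(demoSchema, demoRoot);
```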

 1 void selfContainSchema(schema, outerSchema, outerRef, includedDefs) {
 2   allInnerRefs = find all references in schema
 3   for innerRef in allInnerRefs
 4     resolvedSchema = resolve innerRef in rootSchema
 5     if innerRef is equal outerRef or resolvedSchema.id is equal schema.id
 6       set innerRef to #
 7
 8     if includedDefs contains innerRef
 9       continue
10
11     if resolvedSchema.anyOf exists
12       for innerSchema in resolvedSchema.anyOf
13         copyAndResolveInner(innerSchema, innerRef, outerSchema, outerRef, includedDefs)
14     else
15       copyAndResolveInner(resolvedSchema, innerRef, outerSchema, outerRef, includedDefs)
16 }
17
18 void copyAndResolveInner(resolvedSchema, innerRef, outerSchema, outerRef, includedDefs) {
19   definition = deepcopy resolvedSchema
20   defName = last path segment in innerRef
21   outerSchema.definitions[defName] = definition
22   add innerRef to includedDefs
23   selfContainSchema(definition, outerSchema, outerRef, includedDefs)
24 }

Listing 4.2: Algorithm to Self-Contain a Schema
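As an illustration with hypothetical type names, consider self-containing a definition #person whose friend property referenced #/definitions/person and whose address property references #/definitions/address, which in turn referenced the person again. After running the algorithm of Listing 4.2, the schema carries its own copy of the address definition, and the references back to the person have been rewritten to the self-reference #:

```json
{
  "id": "#person",
  "properties": {
    "friend": { "$ref": "#" },
    "address": { "$ref": "#/definitions/address" }
  },
  "definitions": {
    "address": {
      "id": "#address",
      "properties": {
        "resident": { "$ref": "#" }
      }
    }
  }
}
```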

Self Contain a Schema

When a schema defining a data type contains circular references to other type definitions, these references cannot be resolved up front by the reference-resolving mechanism (see the subsection on reference resolving in schemata). If the schema is needed on its own, outside of the root schema, these references need to stay resolvable. Therefore, the referenced definitions (which are schemata themselves) must be copied into the schema's definitions block. Thereby, the references in these copied schemata must be resolvable, too.

When the getSelfContainedSchema method of the SchemaService is called, the schema is first resolved from the given parentSchema with the given refPath. If the resulting schema has already been self-contained before, it is returned from the SchemaService's selfContainedSchemas map. Otherwise, we need to self-contain the schema. This is done by calling the selfContainSchema method shown as pseudocode in Listing 4.2.

First, we introduce the parameters of the selfContainSchema method. schema is the schema whose references we process in the current function call. outerSchema is the schema that will be self-contained in the end; needed definitions are added to this schema. outerRef is the reference which is considered to be a self-reference in the self-contained outerSchema. includedDefs is the list of definitions already added to the outerSchema. By default, it is initialized with #, which denotes the root of a schema. When selfContainSchema is first called from getSelfContainedSchema, schema and outerSchema are both initialized with the schema we want to self-contain, outerRef is initialized with refPath, and includedDefs with the default value.

The algorithm works in the following way. First, all references inside the current schema are collected (line 2). Then the algorithm iterates over all inner references and executes the following steps for every reference. First, the referenced schema is retrieved by resolving the innerRef against the SchemaService's rootSchema (line 4). If the innerRef references the outerSchema, it is replaced with the self-reference (lines 5 and 6). If the included definitions already contain the innerRef, no additional steps are needed because it was already added to the outerSchema's definitions; the algorithm continues with the next reference (lines 8 and 9). Otherwise, self-references must be replaced and required definitions of the referenced schema added before it is added to the outerSchema's definitions. In case the referenced schema contains the anyOf keyword, all of its child schemata are processed separately (lines 11 to 13). Otherwise, the referenced schema can be processed directly (lines 14 to 15).

To process a schema, copyAndResolveInner is called. First, it copies the schema and extracts the schema's name from the innerRef that references it (lines 19 and 20). Then the copied schema is added to the outerSchema's definitions block (line 21) and the innerRef is added to the included definitions (line 22). Finally, the selfContainSchema method is recursively called with the definition as the current schema (line 23). This is necessary to add definitions referenced in the current definition to the outer schema we want to self-contain.

Limitation The self-contain algorithm can only handle local references defined in JSON Pointer format. Thereby, the root of the pointers before the self-containment must

be the SchemaService's rootSchema. After the self-containing process is finished, the root of the references is the self-contained schema.

Reference Resolving in Schemata

In order to allow defining data using multiple schemata as well as reusing data definitions inside a schema, we support referencing inside and outside a schema. Therefore, we resolve the schema's JSON References and JSON Pointers before setting it as the SchemaService's root schema. To reference another schema, the $ref property is used instead of defining the schema inline. To load a remote schema, a URI using the HTTP protocol can be used. For local references, JSON Pointers are used; the path is specified starting from the root of the local schema. To not re-invent the wheel, we use the open-source library json-refs to resolve the references. When a reference is resolved, the target schema is copied and replaces the reference in the in-memory version of the loaded schema. This adheres to the JSON Reference draft and allows simpler processing of the schema later on because resolved references do not need to be loaded again.

Limitations Our approach cannot resolve circular references. In case of a circular reference, the reference is left in the schema unchanged. For local references, this does not pose a limitation because they can be resolved on demand when creating containment properties or reference properties, or when self-containing a schema. However, this is not the case for unresolved remote references. Therefore, schemata with circular remote references are not supported. This includes references to a remote schema which only has circles inside the remote schema.

4.8. Validation

The Validation component is responsible for validating the properties of a data element against the schema defining the data. The component is implemented by the ValidationService class, which was already available in JSONForms 2.
It is used to fulfill the two functional requirements demanding property validation (FR 8) and its automatic execution (FR 13). To validate the properties and display the errors, the ValidationService is instantiated with the UI Schema defining the form and the JSON Schema defining valid data. Such a UI Schema is always available, even if none was registered by the user, because one is generated in case none is available (see subsection 4.4.2). Whenever the data of the form changes, the validation is executed in the following three steps.

1. The data element is validated against the schema. Thereby, the data element's properties are validated against the restrictions defined in the schema for the corresponding properties. For instance, this validates whether required properties are set, whether numbers adhere to specified minimum and maximum values, or whether a string property adheres to a required regular expression.

2. Detected errors are recorded and mapped to the UI Schema elements which represent the erroneous properties.

3. Affected UI Schema elements are notified about their errors in order to display a suitable error message at the control rendering the affected property.

4.9. References

Defining references between data objects is an important part of creating data because data rarely exists in a vacuum. Instead, it is often distributed over multiple models. A simple example is having one model containing students and one containing lectures. We want to associate every student with the lectures they are interested in. To achieve this, we reference these lectures for every student. Without references, we would need to copy every lecture definition to every student that is interested in it, which would lead to a lot of duplication. And even if the student and lecture data were part of a single model, we would still need references to avoid copying lecture data to multiple students interested in the same lecture. Consequently, references to other data inside and outside a model are an integral part of defining data models.

When defining a reference mechanism for the editor framework, we identified the following aspects that must be addressed. (1) How to specify valid reference targets? We define the valid reference targets in the JSON Schema which describes the data modeled in the editor. Therefore, we introduce a new property for JSON Schemata in subsection 4.9.1. This definition is resolved depending on the used reference technique. (2) How to serialize and resolve a reference?
For this, mainly two possibilities exist: saving the reference as a unique ID that identifies the referenced data, or saving a path to the referenced data object. We explore both possibilities in the corresponding subsections below. (3) How to guarantee that a user may only reference valid targets? This is solved by providing renderers that limit the selection to valid reference targets, for instance by presenting them in a combo box. (4) How to validate that an existing reference resolves to a valid target? As this depends on the implemented reference technique, it is explained in the corresponding subsections.

Links

We use a links definition block to define references in a JSON Schema. This links block is specified based on the links property and the Link Description Objects of the JSON Hyper-Schema internet draft. The links block consists of an arbitrary number of Link Description Objects. For these objects we use two properties defined in the Hyper-Schema: href and targetSchema.

href The href property contains a URI template. We limit it to the following format: <path-to-ref-targets>{<reference-property>}. <reference-property> is the name of the property which contains the serialized reference value. <path-to-ref-targets> describes how to find the root data that contains possible reference targets. Currently, we support two kinds of locations: local data in the current model and data registered in a resource set. In the first case, the path can be empty to select the whole current model including the root itself, or start with a #. To reference data contained in a registered resource, we defined a new protocol with the following format: rs://<resource-name>/<local-path-in-resource>. Thereby, <resource-name> is the name under which the resource was registered to the editor. <local-path-in-resource> is an optional JSON Pointer that defines the reference target root inside the resource. If <local-path-in-resource> is empty, the whole resource itself is the reference target root.

targetSchema The targetSchema property contains a JSON Schema that restricts which data objects found in the reference target root are valid reference targets. In addition to specifying the schema inline or referencing it with a JSON Reference, we also allow loading it from a registered resource. To accomplish this, we introduced the resource property. If the targetSchema contains this property, the property's value is interpreted as a resource name. The resource is loaded and used in place of the targetSchema.
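The href format can be made concrete with a small parsing sketch (a hypothetical helper with invented names, not part of the framework):

```typescript
// Parses "<path-to-ref-targets>{<reference-property>}", where the path is
// either local ("" or starting with "#") or a resource URI
// ("rs://<resource-name>/<local-path-in-resource>").
interface ParsedHref {
  targetRoot: string;        // where to look for reference targets
  referenceProperty: string; // property holding the serialized reference
  resourceName?: string;     // set when the rs:// protocol is used
}

function parseHref(href: string): ParsedHref {
  const match = /^(.*)\{([^}]+)\}$/.exec(href);
  if (match === null) {
    throw new Error('href does not end in a {reference-property} template: ' + href);
  }
  const targetRoot = match[1];
  const referenceProperty = match[2];
  const rs = /^rs:\/\/([^/]+)/.exec(targetRoot);
  if (rs !== null) {
    return { targetRoot, referenceProperty, resourceName: rs[1] };
  }
  return { targetRoot, referenceProperty };
}
```
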
Example In order to illustrate the usage of the specification above, Listing 4.3 shows a JSON Schema with two reference definitions. Lines 3 to 6 show the property definitions for the object described by the schema. Lines 7 to 18 show a links block with two reference definitions. The href property in line 9 defines two things: first, the reference targets are contained in the root of the instance's current data; second, the reference's serialization is stored in the localRef property defined in line 4. The targetSchema property in line 10 contains an inline schema definition. It states that valid reference targets must be of type string. The second reference definition uses resources. The href property in line 13 defines that the reference targets are contained in a registered resource of name data. This includes the root data object itself. The serialized reference value is stored in the resourceRef property defined in line 5. The targetSchema references a schema in the registered resource of name personSchema by using our newly introduced resource keyword.

 1 {
 2   "type": "object",
 3   "properties": {
 4     "localRef": { "type": "string" },
 5     "resourceRef": { "type": "string" }
 6   },
 7   "links": [
 8     {
 9       "href": "#/{localRef}",
10       "targetSchema": { "type": "string" }
11     },
12     {
13       "href": "rs://data/{resourceRef}",
14       "targetSchema": {
15         "resource": "personSchema"
16       }
17     }
18   ]
19 }

Listing 4.3: JSON Schema with an Example Declaration of a Links Block with Two Reference Definitions

Figure 4.6.: UML Class Diagram: Resource Set Interface

Limitations The first limitation of the current implementation concerns the possible reference targets: currently, only local and resource-based referencing is possible. Another common resource location are web-based targets specified via the HTTP protocol. This would allow loading static resources as well as requesting data from a web service by using the reference property as a parameter. The second limitation lies in the possible target schemata: they must be fully resolvable, which means they cannot contain any circular references.

Resource Set

To allow registering resources to the editor, we implemented a resource set mechanism in JSON Forms. Thereby, the JsonForms class contains one resource set as a static

variable. A resource can be any data object. The resource set's interface is shown in Figure 4.6. It provides methods to check whether a registered resource exists, to get a resource by providing its name, and to register a resource for a given name. By setting the resolveReferences parameter of the registerResource method to true, the ResourceSet is instructed to resolve JSON References inside the resource.

Our implementation of the ResourceSet simply stores the registered resources in a map from a resource's name to its content. Future implementations could be extended by a mechanism to register resources by providing a URI instead of the data itself. This would allow accessing a multitude of different data sources through a unified interface. Furthermore, it could be used to load the data on demand, which decreases the editor's memory consumption for big data objects.

ID-based References

The first of our two implemented referencing techniques is ID-based referencing. The main idea is to identify a referenced data object by a unique ID. This ID is stored in the reference property and allows resolving the referenced data object later. Using IDs poses two main challenges.

First, we need to know which property of potential reference targets contains the ID. For this, two possible solutions exist: either providing the identifying property in a reference definition alongside the href and targetSchema properties, or configuring it externally. We decided on the latter for the following reasons. First, the links standard introduced in subsection 4.9.1 does not define such a property. Second, to guarantee unique IDs, often a property independent of the rest of the data object is used; this property typically contains a generated ID. Third, using a globally configured identifying property simplifies generating unique IDs for data objects created in the editor.

The second challenge is that a reference target must contain a unique ID to allow referencing it. One aspect of this is that JSON objects do not carry any type information; type information is only available by associating them with a schema. To ensure that reference targets have an ID, the targetSchema of an ID-based reference must define the configured identifying property as one of its properties. In order to reliably get unique IDs for data objects created in the editor, we provide an ID generation mechanism. When a global identifying property is configured, the editor generates a new ID for every created data object. To get unique IDs (with near certainty), we generate universally unique identifiers (UUIDs). Consequently, when referencing data created in the editor, the presence of unique IDs does not pose a problem.

To resolve a reference serialized as an ID, two steps are executed. First, all possible reference targets are retrieved by resolving all data objects at the target location specified in the href property (see subsection 4.9.1). Second, the data object containing the serialized ID in the identifying property is the reference target. Whether a reference is valid can be determined in the same way: first, try to resolve the reference. If no data object is returned, no valid reference target with the provided ID exists and the reference is invalid. Otherwise, it is valid.
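The resolve-and-validate steps just described amount to a lookup over the collected targets. A sketch with hypothetical helper names:

```typescript
// targets are the data objects found at the href target root;
// identityProperty is the globally configured identifying property.
function resolveIdReference(
  targets: any[],
  identityProperty: string,
  serializedId: string
): any | undefined {
  return targets.find(t => t[identityProperty] === serializedId);
}

// A reference is valid exactly if the lookup above finds a target.
function isValidIdReference(
  targets: any[],
  identityProperty: string,
  serializedId: string
): boolean {
  return resolveIdReference(targets, identityProperty, serializedId) !== undefined;
}
```
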

Limitations Our prototypical implementation of ID-based referencing has the following limitations. As indicated above, data objects need to contain a unique ID to allow referencing them. On-the-fly generation of IDs for otherwise valid targets is not provided; however, this could be introduced retroactively. The second limitation is related to the search for possible reference targets: the current implementation only evaluates targets at the exact location specified in the href property. No recursive search of elements deeper in the containment hierarchy is done. Finally, a targetSchema used to define an ID-based reference must adhere to the following restrictions. It must have an id identifying the type represented by the schema, and it must not use any of the following JSON Schema keywords: anyOf, allOf, oneOf, not.

To use ID-based referencing, a custom renderer must be provided to render the reference property. We implemented an abstract base control providing the reference functionality. When implementing a concrete renderer based on this abstract base, only one method needs to be overwritten, which returns the name of a label property for the reference targets offered by the control. The label property's content of a reference target is used as its name when displaying it in the control.

Path-based References

The second reference technique we implemented is path-based referencing. The idea is to reference a data object by storing its location as a path relative to the root of the reference targets. This path is stored in the reference property. To resolve a reference target, two steps are executed. First, the root of the reference targets is resolved by resolving the URI given in the reference configuration's href property as described in subsection 4.9.1. In the second step, the stored path is resolved against this root data. If the path is resolvable, the referenced data is returned.
The main challenge of this approach is collecting all possible reference targets. The result is collected as a map which contains the reference paths as keys and the corresponding target data as values. To build it, we first resolve the reference target root. Starting from there, we recursively search all contained properties and array entries. The recursive search algorithm is shown as pseudocode in Listing 4.4. We record the current path relative to the target root in every step. To determine whether the currently analyzed data object is a valid reference target, we validate it against the targetSchema. If the validation succeeds, a new entry is added to the result map. Afterwards, we check whether the current data is an array or a map. In both cases, we iterate over the children and recursively call the search algorithm for them. The results of these calls are merged into the result map. Finally, the result map is returned. To validate whether a reference property contains a valid path, the map of all reference targets can be gathered. If this map contains the reference path as a key, a valid target is referenced.

 1 Map<string, any> collectionHelperMap(currentPath, data, targetSchema) {
 2   result = new Map<string, any>()
 3   if data validates against targetSchema
 4     result[currentPath] = data
 5
 6   if data is an array
 7     for (i = 0; i < data.length; i++)
 8       childPath = currentPath is empty ? i : currentPath + "/" + i
 9       childResult = collectionHelperMap(childPath, data[i], targetSchema)
10       result = merge childResult into result
11   else if data is a map and not empty
12     for key in data.keys
13       childPath = currentPath is empty ? key : currentPath + "/" + key
14       childResult = collectionHelperMap(childPath, data[key], targetSchema)
15       result = merge childResult into result
16
17   return result
18 }

Listing 4.4: Algorithm to Collect Reference Targets from a Given Root Data

Limitations One important drawback of path-based referencing is that references are not robust when the reference path goes through an array. Because the reference path must contain a specific index to reference the content of an array, the resolution might deliver an unexpected result when the order of the array changes. This is the case because the data object at the referenced index might be swapped with another one. As long as the new data object is a valid reference target according to the targetSchema, this change cannot be detected. The second limitation applies to the reference target root: it should not contain any unresolved JSON References because the current implementation of the search algorithm (see Listing 4.4) does not resolve them. If the target data is contained in a resource, non-circular references can be resolved when adding the data to the resource set (see subsection 4.9.2).

To use path-based referencing, a UI Schema must be provided for the type of the data object containing the reference property. Thereby, a ControlElement must be configured for the reference property. To mark the property to be rendered as a path-based reference, an option must be added to the ControlElement: the key of the option is reference-control and the value path. This allows our path renderer to recognize the property as a reference.
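A ControlElement marked this way could look as follows (the scope path is a hypothetical example):

```json
{
  "type": "Control",
  "scope": { "$ref": "#/properties/bestFriend" },
  "options": { "reference-control": "path" }
}
```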

Figure 4.7.: Overview of the Editor Framework's Configurations and Customizations (legend: configures / uses)

4.10. Customization

In this section, we describe the customization options of the editor framework in detail. An overview of the relationships between the editor framework's customizations and configurations is shown in Figure 4.7. As already discussed in previous sections, the JSON Schema configures the data editable in the editor and is analyzed by the Parser (see section 4.7). Likewise, services customize the editor's behavior by accessing the editor renderer (see section 4.3). As these configurations have already been discussed in detail, we will not elaborate on them any further in this section. The image mapping, label mapping, and model mapping customize the containment tree created by the TreeRenderer component (see section 4.5). Multiple UI Schemata are used to configure the rendering of data elements which are selected in the containment tree. Thereby, the form for every data type is defined in a separate UI Schema. In order to explain the configuration options and show concretely how an editor can be configured with them, we define a small sample schema in Listing 4.5. Based on it, we define suitable example mappings and UI Schemata in the following sections in addition to their general descriptions.

Image Mapping

The image mapping allows defining a CSS class for every type definition in the editor's JSON Schema. The CSS class of a type is used by the TreeRenderer to display an icon for the type's instances in the element containment tree. Therefore, the icons have to be linked in the configured CSS classes. Thereby, the type is specified as the id of the (sub-)schema defining it. An image mapping for the example schema in Listing 4.5 is defined in lines 1 to 5 of Listing 4.6. The listing associates a CSS class with each of the schema's data types in lines 2 to 4. In lines 8 to 10, we specify the corresponding CSS classes and load a local image in each one. As these classes will be used by the TreeRenderer, the loaded icons are shown for instances of the types.

 1 {
 2   "#mammal": "mammal",
 3   "#insect": "insect",
 4   "#petPerson": "person"
 5 }
 6
 7 // CSS
 8 .mammal { background-image: url("./mammal.gif"); }
 9 .insect { background-image: url("./insect.gif"); }
10 .person { background-image: url("./person.gif"); }

Listing 4.6: Sample Image Mapping

 1 {
 2   "definitions": {
 3     "mammal": {
 4       "id": "#mammal",
 5       "properties": {
 6         "name": { "type": "string" },
 7         "species": { "type": "string", "enum": ["cat", "dog"] },
 8         "sex": { "type": "string", "enum": ["male", "female"] },
 9         "pregnant": { "type": "boolean", "default": false }
10       }
11     },
12     "insect": {
13       "id": "#insect",
14       "properties": {
15         "name": { "type": "string" },
16         "species": { "type": "string", "enum": ["praying mantis", "spider"] }
17       }
18     }
19   },
20   "id": "#petPerson",
21   "properties": {
22     "fullName": { "type": "string" },
23     "pets": {
24       "type": "array",
25       "items": {
26         "anyOf": [
27           { "$ref": "#/definitions/mammal" },
28           { "$ref": "#/definitions/insect" }
29         ]
30       }
31     }
32   }
33 }

Listing 4.5: Customization Example JSON Schema

Label Mapping

The label mapping allows defining an eponymous property for every type definition in the editor's JSON Schema. The content of this property is used by the TreeRenderer to determine the label shown for a data element in the element containment tree. Thereby, the type is specified as the id of the (sub-)schema defining it. Listing 4.7 configures a label mapping for the JSON Schema shown in Listing 4.5. For mammals and insects, the content of their name property will be used as their label in the containment tree. For pet-persons, the fullName property is used.

1 {
2   "#mammal": "name",
3   "#insect": "name",
4   "#petPerson": "fullName"
5 }

Listing 4.7: Sample Label Mapping

Improvements. Future improvements of this mapping could allow specifying a function that assembles a data element's label. This would allow using more than one property, defining static parts of the label, and defining default labels in case the required properties are not set.

Model Mapping

The model mapping is used to determine the type of a data element that is part of a containment property (see 4.7.1) which can contain multiple different types by using JSON Schema's anyOf keyword. This is necessary because of two characteristics of JSON that come together when using anyOf to allow multiple types in a property. First, JSON data is not typed in itself; consequently, the associated type definition cannot be inferred unambiguously from a JSON data object. Normally, this is not a problem for the editor because it can infer the type from the property that the data is contained in. However, when this is combined with allowing multiple types in a property, the type identification is no longer possible. As a consequence, we need a mechanism to infer the type from the data through other means: the model mapping, which infers the type based on a property in the data.

The model mapping specifies the name of a property that contains a value identifying the type of a data object. Additionally, it contains a mapping from the values contained in this property to the type defining the data. Thereby, the type is specified as the id of the (sub-)schema defining it. Listing 4.8 configures a model mapping for the sample schema shown in Listing 4.5. We configure the identifying property as species in line 2. In lines 4 and 5, we specify that an object whose species property has the value cat or dog is of the type with id #mammal. Correspondingly, we associate the values praying mantis and spider with the type that has the id #insect.
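The lookup implied by this mapping can be sketched as follows (a hypothetical helper; the actual TreeRenderer integrates this with the containment tree):

```typescript
interface ModelMapping {
  attribute: string;                     // property holding the identifying value
  mapping: { [value: string]: string };  // identifying value -> type id
}

// Returns the id of the type defining the data, or undefined if the
// identifying property is missing or its value is not mapped.
function getTypeId(modelMapping: ModelMapping, data: any): string | undefined {
  const value = data[modelMapping.attribute];
  return value === undefined ? undefined : modelMapping.mapping[value];
}
```
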

1 {
2   "attribute": "species",
3   "mapping": {
4     "cat": "#mammal",
5     "dog": "#mammal",
6     "praying mantis": "#insect",
7     "spider": "#insect"
8   }
9 }

Listing 4.8: Sample Model Mapping

Improvements. Further improvements of this mapping should address two of its weaknesses. First, the property used to gather the identifying data is the same for all mappings. This is a problem for JSON Schemata containing definitions whose affected types do not share a property of the same name. Second, the keys of the mapping are hard-coded values at the moment. More flexibility could be achieved by using a predicate function which is evaluated on the checked data; the function then returns whether the data is an instance of the mapped type. A more advanced approach could use a function returning a numerical value; the type associated with the highest number is then chosen. This would allow choosing the most likely type in case it cannot be determined unambiguously. However, using functions would most likely increase the effort of defining the mapping.

UI Schemata

A UI Schema is used by JSON Forms to render a form for a data object of a type specified by a data schema. In our editor framework, one UI Schema can be registered for every data type whose instances can be created in the element containment tree. If no UI Schema is registered for a data type, a default one is generated when an instance of the type is rendered (see subsection 4.4.2). A UI Schema is a view model which defines how the data is rendered as a form. It specifies which properties are rendered, which layouts are used for this, and under which circumstances the properties are editable or shown. A UI Schema is built as a containment hierarchy with exactly one UISchemaElement as its root.

Elements of a UI Schema

In this section, we introduce all elements that are used to define UI Schemata. Figure 4.8 shows the inheritance hierarchy of the elements as well as their properties. Figure 4.9 shows all layouts and Figure 4.10 shows all elements needed to configure rules. In JSON Forms' implementation, all these elements are defined as interfaces; in favor of better readability, we omitted the stereotypes in the UML diagrams.

The UISchemaElement's property type fulfills a special role. It defines which type an

Figure 4.8.: UML Class Diagram: UI Schema Element Hierarchy

instance of UISchemaElement or one of its children has. Thereby, every type that is a direct or indirect child of UISchemaElement requires the type property to be set to a fixed string identifying it. For instance, the ControlElement type defines that the type property of all of its instances is set to Control. For the GroupLayout it is set to Group and for the LabelElement to Label. All other element types use their interface name. In the following, we introduce all elements and explain their purpose.

UISchemaElement The UISchemaElement is the base of the UI Schema. A UI Schema consists of exactly one UISchemaElement at the root level. It defines the basic properties that every UI Schema element must provide. The options property can contain any kind of data. This allows annotating a UISchemaElement with additional information that might be needed by a renderer. The runtime property is a reference to a context object which encapsulates an element's runtime state, e.g. whether it is visible or whether it is associated with any validation errors. Additionally, every element may configure one Rule (see below).

Figure 4.9.: UML Class Diagram: UI Schema Layouts Hierarchy

Figure 4.10.: UML Class Diagram: UI Schema Rule Hierarchy

ControlElement A ControlElement configures one property that is rendered in a form. In addition to the UISchemaElement, the ControlElement extends Scopable. The $ref property of the inherited scope property contains the path to the rendered property in the data schema. This path is specified as a JSON Pointer relative to the root of the referenced data schema. The label property's type Label is a placeholder representing a choice between the types boolean, string, and a label object. In case of a boolean, the label property defines whether a label is shown. In case of a string, a label with the property's content is shown. In the label object, both options can be specified: a text and whether it is displayed.

MasterDetailLayout The MasterDetailLayout configures a tree master detail layout for the data referenced by its scope. The scope and label properties work the same as for the ControlElement. A tree master detail layout consists of a dynamic containment tree and a detail view showing the properties of the element currently selected in the tree. The tree's elements represent the containment hierarchy of the rendered data. It is rendered by the TreeRenderer component (see section 4.5).

LabelElement A LabelElement renders a label with the text specified in its text property.

Layouts A Layout contains one or more UISchemaElements in its elements property. A layout renders all of its child elements in an arrangement depending on its type. There are three universal layouts and the Category, which is only used in the context of Categorizations (see below). The HorizontalLayout renders all contained elements side by side. The VerticalLayout and the GroupLayout render their contained elements one beneath the other. Additionally, the GroupLayout shows an optional label and highlights the elements' grouping, for instance by enclosing them with a border.
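For illustration, a HorizontalLayout that renders two controls side by side could be declared like this (property names hypothetical):

```json
{
  "type": "HorizontalLayout",
  "elements": [
    { "type": "Control", "scope": { "$ref": "#/properties/name" } },
    { "type": "Control", "scope": { "$ref": "#/properties/species" } }
  ]
}
```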
Categorization

Similarly to the MasterDetailLayout, a Categorization also renders a master and a detail view. In contrast to the MasterDetailLayout, the Categorization's master view does not contain a tree of data elements. Instead, it contains a tree of the Categorizations and Categories of its elements property. A Category is a Layout with a label. The label is the name of the category displayed in the containing categorization's tree. The contained elements of the category are rendered in the categorization's detail view when the category is selected. In conclusion, a Categorization behaves similarly to a tree master detail but uses a pre-configured, static tree structure instead of a dynamic one defined by the rendered data.

Rules

Generally speaking, a Rule triggers a configured effect on its parent element when the rule's condition evaluates to true. The possible effects are defined in the RuleEffect enumeration. For instance, a rule with effect HIDE makes its target invisible when its condition is true. Currently, there is only one type of Condition: the LeafCondition. It references a property of the rendered data with its scope property. This works the same as for ControlElements. When the property's content matches the value configured in expectedvalue, the rule evaluates to true. At the moment, only simple conditions can be configured because a Rule may only contain one Condition and the LeafCondition only allows referencing one property. This heavily limits the configuration possibilities. One way to deal with this would be the introduction of conditions that contain further conditions and aggregate their evaluation results. For instance, an AndCondition would connect the evaluation results of its children with a logical AND. Congruently, an OrCondition would do the same with a logical OR. As these conditions could be cascaded, arbitrarily complex conditions could be created.

Example UI Schema

Listing 4.9 shows an example of a UI Schema for the type with id #mammal defined in lines 3 to 11 of Listing 4.5. In line 2, we define the root UISchemaElement as a VerticalLayout. It contains four Controls specifying the properties to render. In line 14, we explicitly define to show the label Gender instead of automatically inferring it from the property's name. In lines 20 to 26, we configure a rule for the pregnant property: we only render the property if the sex property of the mammal is set to female. This is done by specifying in line 21 that the Control is shown when the condition is met.
In line 23, we define that the condition evaluates to true if the target property has the value female. In line 24, we reference the sex property as the target property to evaluate.

Resources

Resources can be used to register data to the editor that is not part of the data edited in the editor. A resource can be any data provided as a JavaScript object. A resource is registered under a unique name. This name can then be used to reference data in the resource or to use a resource as a target schema for a reference definition. For more details regarding references and the resource set holding the registered resources see section
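As an illustration of this idea (the ResourceSet class and its method names below are hypothetical, not the framework's actual API), a resource set can be pictured as a name-to-object map:

```javascript
// Hypothetical sketch of a resource set: resources are arbitrary
// JavaScript objects registered under a unique name.
class ResourceSet {
  constructor() {
    this.resources = new Map();
  }
  // Register a resource under a unique name; re-registration is an error.
  register(name, data) {
    if (this.resources.has(name)) {
      throw new Error(`Resource '${name}' is already registered`);
    }
    this.resources.set(name, data);
  }
  // Look up a registered resource by name.
  get(name) {
    return this.resources.get(name);
  }
}

const resources = new ResourceSet();
// Register a data schema so link definitions can target it by name.
resources.register("dataschema", { type: "object", properties: {} });
console.log(resources.get("dataschema").type); // "object"
```

The name "dataschema" mirrors the resource name used later when configuring the UI Schema editor's link definitions.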

 1 {
 2   "type": "VerticalLayout",
 3   "elements": [
 4     {
 5       "type": "Control",
 6       "scope": { "$ref": "#/properties/name" }
 7     },
 8     {
 9       "type": "Control",
10       "scope": { "$ref": "#/properties/species" }
11     },
12     {
13       "type": "Control",
14       "label": "Gender",
15       "scope": { "$ref": "#/properties/sex" }
16     },
17     {
18       "type": "Control",
19       "scope": { "$ref": "#/properties/pregnant" },
20       "rule": {
21         "effect": "SHOW",
22         "condition": {
23           "expectedvalue": "female",
24           "scope": { "$ref": "#/properties/sex" }
25         }
26       }
27     }
28   ]
29 }

Listing 4.9: Sample UI Schema for the Mammal Type of Listing 4.5
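The limitation discussed for Rules could be lifted with cascadable AndCondition and OrCondition types. A minimal sketch of how such nested conditions might be evaluated (the function, the type tags, and the direct property lookup are illustrative assumptions, not part of the implemented framework, which resolves properties via scope instead):

```javascript
// Illustrative evaluator for cascaded rule conditions. Leaf conditions
// compare a data value against expectedvalue; AND/OR conditions
// aggregate the results of their child conditions recursively.
function evaluateCondition(condition, data) {
  switch (condition.type) {
    case "LEAF":
      // A leaf condition holds when the referenced property matches.
      return data[condition.property] === condition.expectedvalue;
    case "AND":
      return condition.conditions.every(c => evaluateCondition(c, data));
    case "OR":
      return condition.conditions.some(c => evaluateCondition(c, data));
    default:
      throw new Error(`Unknown condition type: ${condition.type}`);
  }
}

// Example: show a control only for adult female mammals.
const condition = {
  type: "AND",
  conditions: [
    { type: "LEAF", property: "sex", expectedvalue: "female" },
    { type: "LEAF", property: "adult", expectedvalue: true }
  ]
};

console.log(evaluateCondition(condition, { sex: "female", adult: true })); // true
console.log(evaluateCondition(condition, { sex: "male", adult: true }));   // false
```

Because the evaluation is recursive, AND and OR nodes can be nested to arbitrary depth, yielding the arbitrarily complex conditions described above.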

Figure 4.11.: UML Class Diagram: Editor Configuration Object

Configuration Object

The configuration object contains all configurations necessary to specify an editor. It can be set as a single object on the editor. In principle, this allows reading the whole configuration from one JSON file. A configuration object is defined by the EditorConfiguration type shown in Figure 4.11. When defining such a configuration object, only the dataschema attribute is mandatory. The attributes are defined as described in the foregoing subsections.

Testing

To make sure a developed system works as intended, extensive testing is needed. Besides testing generated editors by hand, we implemented automated unit tests. These allow automatic testing of sub-parts of the system and enable easy regression testing when the system is changed. As unit tests often aim to test a single functionality of a system, they are especially suitable for testing algorithms like the retrieval of containment properties or the resolution of references. To provide unit tests for the system, we use the open-source framework AVA. AVA runs unit tests fully automatically with serial and parallel execution. Furthermore, it allows testing asynchronous functions and callbacks. It provides functionality to configure common test behavior for multiple tests, provides various assertion tools, and is well documented.

5. Evaluation

In this chapter, we evaluate our implementation of the model-driven JSON editor framework. To this end, we chose three languages and generated an editor for each with our framework. These languages are introduced in section 5.1. We describe the process of creating the editors in section 5.2. Based on this, we collect the advantages and limitations of using the editor framework in section 5.3. To judge the usability of editors generated with our framework, we conduct a usability test in cooperation with our industry partner in section 5.4. Finally, we conduct an anecdotal comparison of an editor generated with our framework to a specific implementation in section 5.5.

5.1. Evaluation Languages

In this section, we describe the languages for which we each customized and generated an editor.

Ecore

Ecore is the meta-model of the Eclipse Modeling Framework. Ecore is a language that allows specifying structured data through the definition of classes, attributes, references, enumerations, custom data types, and more. Every class can have an arbitrary number of supertypes, and every element can be annotated by an annotation object. A specification defined in the Ecore language can then be used to generate the corresponding code with the Eclipse Modeling Framework.

JSON Schema

JSON Schema is used to define JSON data. It describes which properties a JSON object may contain and of which types these properties can be. The type can be a primitive type or another JSON Schema. In general, every time a new type is defined in JSON Schema, this type definition is a JSON Schema itself. Consequently, JSON Schema is highly recursive. JSON Schema provides various validation keywords which further define the legal contents of an object. However, the contents of a valid JSON Schema are highly variable.
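To make this recursion concrete, here is a small illustrative schema (the person/address example is our own, not taken from the thesis) in which the definition of each property is itself a JSON Schema:

```javascript
// Illustrative example of JSON Schema's recursive structure: every
// type definition (here "address" and its "city" property) is itself
// a JSON Schema with the same general shape as the root.
const personSchema = {
  type: "object",
  properties: {
    name: { type: "string" },   // a JSON Schema
    address: {                  // also a JSON Schema ...
      type: "object",
      properties: {
        city: { type: "string" } // ... containing further schemas
      }
    }
  }
};

// Each nested definition carries its own "type" keyword, just like the root.
console.log(personSchema.properties.address.properties.city.type); // "string"
```

This nesting can continue to arbitrary depth, which is what makes generic tooling for JSON Schema challenging.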

As a result, only a few of the available properties of a schema are used at the same time. Many are even mutually exclusive.

UI Schema

UI Schemata are used to define how data objects are rendered in JSONForms and in our editor framework. A detailed description of the language's elements is given in subsection

5.2. Editor Customization Process

In this section, we describe the customizations that we conducted for each of our evaluation languages. We focused on the Ecore and the UI Schema editor because the JSON Schema editor's full configuration is out of the scope of this thesis. However, JSON Schema's complexity gives us valuable insights into the limitations of our framework.

General

For all editors, we create a basic index web page which loads the editor framework and the provided libraries and style sheets. Subsequently, we create the new editor element. Furthermore, we configure the buttons in the index page to allow loading, exporting, and downloading the editor's data. We do not need to implement these services ourselves but simply load them from the editor framework.

Ecore

To configure the Ecore editor, we conducted the following steps:

1. Add unique IDs to type definitions missing them. This is necessary to allow configuring labels, images, and the model mapping for these types.

2. Create a label mapping containing an entry for every type.

3. Create the model mapping with attribute eclass: every element type that is part of a containment property whose schema uses anyOf needs to be mapped. These are the following types: EEnum, EClass, EDataType, EReference, EAttribute.

4. Create an image mapping with an entry for every type. Link a unique CSS class for each type.

5. Create a CSS file that loads the corresponding icon for every type and load it in the index web page.

6. Configure link definitions for the etype properties of the types EReference and EAttribute to allow referencing their type.

7. Implement two custom reference controls for ID-based referencing for the references configured in the previous step. For EReference, only a simple method needs to be overwritten. For EAttribute, the implementation needs more effort: Ecore's basic attribute types (e.g. EInt) need to be shown when selecting an attribute type. Furthermore, not all attribute type options can be shown because attribute types can be EEnums or EDataTypes and ID-based referencing does not support anyOf in their target schema.

8. Create a UI Schema for every type. This is necessary to hide the _id and eclass properties, which are not meant to be seen or edited by the user. Furthermore, for the EAttribute and EReference types, we need to declare that their etype property should be rendered as a reference.

9. Create a configuration object containing the previously created configurations and load it with the editor framework.

Figure 5.1 shows the generated Ecore editor containing a simple model.

UI Schema

To configure the UI Schema editor, we conducted the following steps:

1. Add unique IDs to type definitions missing them. This is necessary to allow configuring labels, images, and the model mapping for these types.

2. Create a label mapping containing an entry for every type.

3. Create the model mapping with attribute type: every element type of the UI Schema is identified by its type property (see ).

4. Create an image mapping with an entry for every type. Link a unique CSS class for each type.

5. Create a CSS file that loads the corresponding icon for every type and load it in the index web page.

6. Extend the export and download services to filter out empty rule objects before serializing the editor's data.
This is needed because the editor creates an empty data object when rendering a Rule's properties as part of a detail view of a type containing a Rule.

7. Configure a link definition for every used $ref property of the Scope type in the JSON Schema. Configure the reference targets to be in a resource named dataschema. Configure the target schema to be loaded from a resource.

Figure 5.1.: Screenshot Showing the Created Ecore Editor

Figure 5.2.: Screenshot Showing the Created UI Schema Editor

Use a modified version of JSON Schema Draft 4 that does not allow additional properties.

8. Create a UI Schema for every type. This is necessary for two reasons. First, we need to declare for the $ref properties that they should be rendered as a reference. Second, without UI Schemata the forms of most types are confusing: some properties of the rule objects contained in every type use the same names as their parents.

9. Create a configuration object containing the previously created configurations and load it with the editor framework.

Figure 5.2 shows the generated UI Schema editor containing a simple model.

JSON Schema

To configure the JSON Schema editor, we conducted the following steps:

1. Remove the usage of the allOf keyword from the schema.

2. Replace the schema's id as it is not the original JSON Schema 04 anymore.

3. Remove the type property's possibility to be either an array or a single value. Now it can only be a single value.
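The label, model, and image mappings created in the customization steps can be pictured as plain name-to-value objects. The concrete shapes below are assumptions for illustration only; the framework defines its own format:

```javascript
// Hypothetical sketch of the mappings configured for an editor.
// All structures shown here are illustrative assumptions.

// Label mapping: one entry per type, naming the property shown as label.
const labelMapping = {
  "#eclass": { property: "name" },
  "#eattribute": { property: "name" }
};

// Model mapping: identify an element's type by a discriminator attribute
// (eclass for the Ecore editor, type for the UI Schema editor).
const modelMapping = {
  attribute: "eclass",
  mapping: {
    "EClass": "#eclass",
    "EAttribute": "#eattribute"
  }
};

// Image mapping: link a unique CSS class per type; a separate CSS file
// then attaches the corresponding icon to each class.
const imageMapping = {
  "#eclass": "icon-eclass",
  "#eattribute": "icon-eattribute"
};

console.log(modelMapping.mapping["EClass"]); // "#eclass"
```

With mappings of this kind, the editor can look up an element's discriminator value and derive its label and icon without type-specific code.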


More information

ABAP DSL Workbench SAP TechED 2016

ABAP DSL Workbench SAP TechED 2016 ABAP DSL Workbench SAP TechED 2016 Barcelona, November 2016-0 - Hello. Hello. Example Asia Diner Yes? Number 77. Take away? No. Hello. Hello. Hello. Hello. As always? Yes. As always? Yes. Where are the

More information

Automatic Merging of Specification Documents in a Parallel Development Environment

Automatic Merging of Specification Documents in a Parallel Development Environment Automatic Merging of Specification Documents in a Parallel Development Environment Rickard Böttcher Linus Karnland Department of Computer Science Lund University, Faculty of Engineering December 16, 2008

More information

Describing Computer Languages

Describing Computer Languages Markus Scheidgen Describing Computer Languages Meta-languages to describe languages, and meta-tools to automatically create language tools Doctoral Thesis August 10, 2008 Humboldt-Universität zu Berlin

More information

Christian Doppler Laboratory

Christian Doppler Laboratory Christian Doppler Laboratory Software Engineering Integration For Flexible Automation Systems AutomationML Models (in EMF and EA) for Modelers and Software Developers Emanuel Mätzler Institute of Software

More information

ADD 3.0: Rethinking Drivers and Decisions in the Design Process

ADD 3.0: Rethinking Drivers and Decisions in the Design Process ADD 3.0: Rethinking Drivers and Decisions in the Design Process Rick Kazman Humberto Cervantes SATURN 2015 Outline Presentation Architectural design and types of drivers The Attribute Driven Design Method

More information

Transformational Design with

Transformational Design with Fakultät Informatik, Institut für Software- und Multimediatechnik, Lehrstuhl für Softwaretechnologie Transformational Design with Model-Driven Architecture () Prof. Dr. U. Aßmann Technische Universität

More information

IBM Rational Software Architect

IBM Rational Software Architect Unifying all aspects of software design and development IBM Rational Software Architect A complete design & development toolset Incorporates all the capabilities in IBM Rational Application Developer for

More information

Pattern-Oriented Development with Rational Rose

Pattern-Oriented Development with Rational Rose Pattern-Oriented Development with Rational Rose Professor Peter Forbrig, Department of Computer Science, University of Rostock, Germany; Dr. Ralf Laemmel, Department of Information Management and Software

More information

Semantics-Based Integration of Embedded Systems Models

Semantics-Based Integration of Embedded Systems Models Semantics-Based Integration of Embedded Systems Models Project András Balogh, OptixWare Research & Development Ltd. n 100021 Outline Embedded systems overview Overview of the GENESYS-INDEXYS approach Current

More information

EMC Documentum Composer

EMC Documentum Composer EMC Documentum Composer Version 6.0 SP1.5 User Guide P/N 300 005 253 A02 EMC Corporation Corporate Headquarters: Hopkinton, MA 01748 9103 1 508 435 1000 www.emc.com Copyright 2008 EMC Corporation. All

More information

Master Thesis: ESB Based Automated EA Documentation

Master Thesis: ESB Based Automated EA Documentation Fakultät für Informatik Technische Universität München Master Thesis: ESB Based Automated EA Documentation Final presentation Student: Supervisor: Advisors: Sebastian Grunow Prof. Pontus Johnson Markus

More information

Model-Level Integration of the OCL Standard Library Using a Pivot Model with Generics Support

Model-Level Integration of the OCL Standard Library Using a Pivot Model with Generics Support Faculty of Computer Science, Institute for Software- and Multimedia-Technology, Chair for Software Technology Matthias Bräuer and Birgit Demuth Model-Level Integration of the Using a Pivot Model with Generics

More information

Proposed Revisions to ebxml Technical. Architecture Specification v1.04

Proposed Revisions to ebxml Technical. Architecture Specification v1.04 Proposed Revisions to ebxml Technical Architecture Specification v1.04 Business Process Team 11 May 2001 (This document is the non-normative version formatted for printing, July 2001) Copyright UN/CEFACT

More information

with openarchitectureware

with openarchitectureware Model-Driven Development with openarchitectureware Markus Völter voelter@acm.orgorg www.voelter.de Sven Efftinge sven@efftinge.de www.efftinge.de Bernd Kolb bernd@kolbware.de www.kolbware.de 2006-7 Völter,

More information

Leveraging the Social Web for Situational Application Development and Business Mashups

Leveraging the Social Web for Situational Application Development and Business Mashups Leveraging the Social Web for Situational Application Development and Business Mashups Stefan Tai stefan.tai@kit.edu www.kit.edu About the Speaker: Stefan Tai Professor, KIT (Karlsruhe Institute of Technology)

More information

Introduction to Model Driven Engineering using Eclipse. Frameworks

Introduction to Model Driven Engineering using Eclipse. Frameworks Introduction to Model Driven Engineering using Eclipse Model Driven Development Generator s Bruce Trask Angel Roman MDE Systems Abstraction Model Driven Development Refinement 1 Part I Agenda What is Model

More information

MDD with OMG Standards MOF, OCL, QVT & Graph Transformations

MDD with OMG Standards MOF, OCL, QVT & Graph Transformations 1 MDD with OMG Standards MOF, OCL, QVT & Graph Transformations Andy Schürr Darmstadt University of Technology andy. schuerr@es.tu-darmstadt.de 20th Feb. 2007, Trento Outline of Presentation 2 Languages

More information

Instance Specialization a Pattern for Multi-level Meta Modelling

Instance Specialization a Pattern for Multi-level Meta Modelling Instance Specialization a Pattern for Multi-level Meta Modelling Matthias Jahn, Bastian Roth and Stefan Jablonski Chair for Applied Computer Science IV: Databases and Information Systems University of

More information

AUTOMATED BEHAVIOUR REFINEMENT USING INTERACTION PATTERNS

AUTOMATED BEHAVIOUR REFINEMENT USING INTERACTION PATTERNS MASTER THESIS AUTOMATED BEHAVIOUR REFINEMENT USING INTERACTION PATTERNS C.J.H. Weeïnk FACULTY OF ELECTRICAL ENGINEERING, MATHEMATICS AND COMPUTER SCIENCE SOFTWARE ENGINEERING EXAMINATION COMMITTEE dr.

More information

Towards End-User Adaptable Model Versioning: The By-Example Operation Recorder

Towards End-User Adaptable Model Versioning: The By-Example Operation Recorder Towards End-User Adaptable Model Versioning: The By-Example Operation Recorder Petra Brosch, Philip Langer, Martina Seidl, and Manuel Wimmer Institute of Software Technology and Interactive Systems Vienna

More information

Model-Based Social Networking Over Femtocell Environments

Model-Based Social Networking Over Femtocell Environments Proc. of World Cong. on Multimedia and Computer Science Model-Based Social Networking Over Femtocell Environments 1 Hajer Berhouma, 2 Kaouthar Sethom Ben Reguiga 1 ESPRIT, Institute of Engineering, Tunis,

More information

Performing searches on Érudit

Performing searches on Érudit Performing searches on Érudit Table of Contents 1. Simple Search 3 2. Advanced search 2.1 Running a search 4 2.2 Operators and search fields 5 2.3 Filters 7 3. Search results 3.1. Refining your search

More information

Patent documents usecases with MyIntelliPatent. Alberto Ciaramella IntelliSemantic 25/11/2012

Patent documents usecases with MyIntelliPatent. Alberto Ciaramella IntelliSemantic 25/11/2012 Patent documents usecases with MyIntelliPatent Alberto Ciaramella IntelliSemantic 25/11/2012 Objectives and contents of this presentation This presentation: identifies and motivates the most significant

More information

Ylvi - Multimedia-izing the Semantic Wiki

Ylvi - Multimedia-izing the Semantic Wiki Ylvi - Multimedia-izing the Semantic Wiki Niko Popitsch 1, Bernhard Schandl 2, rash miri 1, Stefan Leitich 2, and Wolfgang Jochum 2 1 Research Studio Digital Memory Engineering, Vienna, ustria {niko.popitsch,arash.amiri}@researchstudio.at

More information

Vocabulary-Driven Enterprise Architecture Development Guidelines for DoDAF AV-2: Design and Development of the Integrated Dictionary

Vocabulary-Driven Enterprise Architecture Development Guidelines for DoDAF AV-2: Design and Development of the Integrated Dictionary Vocabulary-Driven Enterprise Architecture Development Guidelines for DoDAF AV-2: Design and Development of the Integrated Dictionary December 17, 2009 Version History Version Publication Date Author Description

More information

Extension and integration of i* models with ontologies

Extension and integration of i* models with ontologies Extension and integration of i* models with ontologies Blanca Vazquez 1,2, Hugo Estrada 1, Alicia Martinez 2, Mirko Morandini 3, and Anna Perini 3 1 Fund Information and Documentation for the industry

More information

Roles in Software Development using Domain Specific Modelling Languages

Roles in Software Development using Domain Specific Modelling Languages Roles in Software Development using Domain Specific Modelling Languages Holger Krahn Bernhard Rumpe Steven Völkel Institute for Software Systems Engineering Technische Universität Braunschweig, Braunschweig,

More information

BPMN to BPEL case study solution in VIATRA2

BPMN to BPEL case study solution in VIATRA2 BPMN to BPEL case study solution in VIATRA2 Gábor Bergmann and Ákos Horváth Budapest University of Technology and Economics, Department of Measurement and Information Systems, H-1117 Magyar tudósok krt.

More information

Platform-Independent UI Models: Extraction from UI Prototypes and rendering as W3C Web Components

Platform-Independent UI Models: Extraction from UI Prototypes and rendering as W3C Web Components Platform-Independent UI Models: Extraction from UI Prototypes and rendering as W3C Web Components Marvin Aulenbacher, 19.06.2017, Munich Chair of Software Engineering for Business Information Systems (sebis)

More information

challenges in domain-specific modeling raphaël mannadiar august 27, 2009

challenges in domain-specific modeling raphaël mannadiar august 27, 2009 challenges in domain-specific modeling raphaël mannadiar august 27, 2009 raphaël mannadiar challenges in domain-specific modeling 1/59 outline 1 introduction 2 approaches 3 debugging and simulation 4 differencing

More information

3rd Lecture Languages for information modeling

3rd Lecture Languages for information modeling 3rd Lecture Languages for information modeling Agenda Languages for information modeling UML UML basic concepts Modeling by UML diagrams CASE tools: concepts, features and objectives CASE toolset architecture

More information

18.1 user guide No Magic, Inc. 2015

18.1 user guide No Magic, Inc. 2015 18.1 user guide No Magic, Inc. 2015 All material contained herein is considered proprietary information owned by No Magic, Inc. and is not to be shared, copied, or reproduced by any means. All information

More information

Variability differences among products in PL. Variability in PLE. Language Workbenches. Language Workbenches. Product Line Engineering

Variability differences among products in PL. Variability in PLE. Language Workbenches. Language Workbenches. Product Line Engineering PPL 2009 Keynote Markus Voelter Indepenent/itemis voelter@acm.org http://www.voelter.de Language Workbenches in Product Line Engineering Variability in PLE Language Workbenches in Domain Specific Languages

More information

Coral: A Metamodel Kernel for Transformation Engines

Coral: A Metamodel Kernel for Transformation Engines Coral: A Metamodel Kernel for Transformation Engines Marcus Alanen and Ivan Porres TUCS Turku Centre for Computer Science Department of Computer Science, Åbo Akademi University Lemminkäisenkatu 14, FIN-20520

More information

A Lightweight Language for Software Product Lines Architecture Description

A Lightweight Language for Software Product Lines Architecture Description A Lightweight Language for Software Product Lines Architecture Description Eduardo Silva, Ana Luisa Medeiros, Everton Cavalcante, Thais Batista DIMAp Department of Informatics and Applied Mathematics UFRN

More information

Proposed Revisions to ebxml Technical Architecture Specification v ebxml Business Process Project Team

Proposed Revisions to ebxml Technical Architecture Specification v ebxml Business Process Project Team 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 Proposed Revisions to ebxml Technical Architecture Specification v1.0.4 ebxml Business Process Project Team 11

More information

TERRA support for architecture modeling. K.J. (Karim) Kok. MSc Report. C e Dr.ir. J.F. Broenink Z. Lu, MSc Prof.dr.ir. A. Rensink.

TERRA support for architecture modeling. K.J. (Karim) Kok. MSc Report. C e Dr.ir. J.F. Broenink Z. Lu, MSc Prof.dr.ir. A. Rensink. TERRA support for architecture modeling K.J. (Karim) Kok MSc Report C e Dr.ir. J.F. Broenink Z. Lu, MSc Prof.dr.ir. A. Rensink August 2016 040RAM2016 EE-Math-CS P.O. Box 217 7500 AE Enschede The Netherlands

More information

Semantic Web Domain Knowledge Representation Using Software Engineering Modeling Technique

Semantic Web Domain Knowledge Representation Using Software Engineering Modeling Technique Semantic Web Domain Knowledge Representation Using Software Engineering Modeling Technique Minal Bhise DAIICT, Gandhinagar, Gujarat, India 382007 minal_bhise@daiict.ac.in Abstract. The semantic web offers

More information