Preface

The idea of improving software quality through reuse is not new. After all, if software works and is needed, just reuse it. What is new and evolving is the idea of relative validation through testing and reuse, and the abstraction of code into frameworks for instantiation and reuse. Literal code can be abstracted. These abstractions can be made to yield similar codes, which serve to verify their patterns. There is a taxonomy of representations, from the lowest-level literal codes to their highest-level natural-language descriptions. The quality of software improves in proportion to the degree of reuse at all levels.

Any software that is, in theory, complex enough to allow for self-reference cannot be assured to be absolutely valid. The best that can be attained is a relative validity, which is based on testing. Axiomatic, denotational, and other program semantics are more difficult to verify than the codes they represent! But is there any limit to testing, and how does one maximize the reliability of software through testing? These are the questions that need to be asked. Here are the much sought-after answers.

Randomization theory implies that software should be broken into small coherent modules, which are logically integrated into the whole. These modules are designed to facilitate specification and may be optimized through the use of equivalence transformations. Moreover, symmetric testing (e.g., testing a sort routine using (3, 2, 1), (4, 3, 2, 1), (5, 4, 3, 2, 1)) has minimal value beyond the first test case. Rather, the test cases need to cover all combinations and execution paths, which defines random-basis testing (e.g., (3, 2, 1), (3, 1, 2), (1), (2, 1, 2)); a sketch of the contrast appears below. It is virtually impossible to generate a complete random-basis test for complex software. Hence, reuse in diverse applications serves as a substitute. Also, the more coherent the software modules, the greater their potential for reuse and integration, and the proportionately less the propensity for software bugs to survive as a consequence.
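To fix ideas, here is a minimal sketch in Python. It is illustrative only: the helper names are this sketch's inventions, and the built-in `sorted` stands in for the sort routine under test. Symmetric cases exercise essentially one execution path, while random-basis cases vary length, order, and duplicates.

```python
import random

def is_sorted(xs):
    """True when xs is in non-decreasing order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def symmetric_tests():
    """Symmetric cases: reversed lists of growing length.
    Beyond the first case these exercise the same execution
    path, so they add little confidence."""
    return [[3, 2, 1], [4, 3, 2, 1], [5, 4, 3, 2, 1]]

def random_basis_tests(trials=100, max_len=8, max_val=5, seed=0):
    """Random-basis cases: varied lengths, orders, and duplicates,
    aimed at covering distinct input combinations and paths."""
    rng = random.Random(seed)
    return [[rng.randint(1, max_val) for _ in range(rng.randint(0, max_len))]
            for _ in range(trials)]

def check(sort_fn, cases):
    """Run the routine under test over a batch of cases."""
    for case in cases:
        result = sort_fn(list(case))
        assert is_sorted(result), f"{sort_fn.__name__} failed on {case}"
    print(f"{sort_fn.__name__}: {len(cases)} cases passed")

if __name__ == "__main__":
    check(sorted, symmetric_tests())    # little new coverage after case one
    check(sorted, random_basis_tests()) # broader coverage of the input space
```

Seeding the generator, as above, keeps the randomized batch reproducible while still sampling the input space far more broadly than the symmetric batch.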

Such an approach implies not only the use of integration testing, but the definition and application of true optimizing compilers as well. For example, routines for reading data, iteratively bubbling the data to the list head to create a lexicographic order, and printing the result of the iterations (i.e., the O(n^2) bubble sort) can be iteratively transformed, by an optimizing compiler, into an O(n log n) sort (e.g., Quicksort). Such coherent optimizations may be self-referentially applied to the optimizing compilers themselves for further efficiencies of scale.
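No optimizing compiler is shown here; purely to make the example concrete, the following sketch writes out the two endpoints of the transformation by hand: the quadratic bubble sort the text describes, and an expected O(n log n) Quicksort. Both produce identical results on the same input.

```python
def bubble_sort(xs):
    """O(n^2): repeatedly bubbles the smallest remaining
    element toward the head of the list."""
    xs = list(xs)
    n = len(xs)
    for i in range(n):
        for j in range(n - 1, i, -1):
            if xs[j] < xs[j - 1]:
                xs[j], xs[j - 1] = xs[j - 1], xs[j]
    return xs

def quicksort(xs):
    """Expected O(n log n): partitions around a pivot and recurses."""
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[len(xs) // 2]
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort(left) + mid + quicksort(right)

# Behaviorally equivalent, which is what licenses the transformation.
assert bubble_sort([3, 1, 2]) == quicksort([3, 1, 2]) == [1, 2, 3]
```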

The aforementioned ways of optimizing literal software quality extend to all of its abstract patterns; only here, testing applies to framework instances. It can be seen that software quality is properly dependent upon the scale of the effort. Scale, in turn, is defined by the embodied knowledge (i.e., including frameworks) and processor power that can be brought to bear. Randomization theory and applicative experience converge as software is written for coherency, integrated for functionality, optimized for efficiency, executed to realize its latent potential for massive parallelism, and reused in diverse applications to maximize its validity (i.e., through random-basis testing).

One final note is appropriate. It should be possible to create domain-general, higher-level (i.e., context-sensitive) languages through the inclusion of dialog, for disambiguation, and the capture of knowledge in the form of software frameworks. The higher the level of software, the more reusable it is on account of one central human factor: readability. That is why components are reusable: their descriptive characterizations are understandable. This also follows from randomization theory, although the theory does not address how to achieve such context-sensitive languages in practice, only the result of doing so. Combining the key points herein leads to the inescapable conclusion that software productivity, including quality attributes, is not bounded, combines the best of theory and practice, and, when realized as described, will transform the software industry as we know it for the better.

The traveling salesman problem (TSP) is one of the most studied problems in optimization. If the optimal solution to the TSP could be found in polynomial time, it would follow that every NP-hard problem could be solved in polynomial time, proving P = NP. In Chapter 1, it will be shown that the proposed algorithm finds P ≈ NP with scale. Machine learning through self-randomization is demonstrated in the solution of the TSP. It is argued that self-randomizing knowledge bases will lead to the creation of a synthetic intelligence, which enables cyber-secure software automation.
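The preface does not spell out the Chapter 1 algorithm. Purely as a generic illustration of randomized search applied to the TSP, the sketch below uses random restarts with 2-opt local improvement; every name and parameter is this sketch's assumption, not the book's method.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist):
    """Repeatedly reverse segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

def randomized_tsp(dist, restarts=20, seed=0):
    """Random restarts re-seed the search in different basins,
    escaping local optima a single deterministic start would keep."""
    rng = random.Random(seed)
    n = len(dist)
    best = None
    for _ in range(restarts):
        tour = list(range(n))
        rng.shuffle(tour)
        tour = two_opt(tour, dist)
        if best is None or tour_length(tour, dist) < tour_length(best, dist):
            best = tour
    return best, tour_length(best, dist)

if __name__ == "__main__":
    rng = random.Random(1)
    pts = [(rng.random(), rng.random()) for _ in range(8)]
    dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
             for (bx, by) in pts] for (ax, ay) in pts]
    tour, length = randomized_tsp(dist)
    print(tour, round(length, 3))
```

The restarts loosely echo the role randomization plays here, though the chapter's self-randomizing knowledge bases are a far more ambitious construction than this heuristic.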

The work presented in Chapter 2 pertains to knowledge generalization based on randomization. Inductive knowledge is inferred through transmutation rules. A domain-specific approach is formalized to deal with the transmutation rules, and the randomized knowledge is validated against domain user expertise.

In order to improve software component reuse, Chapter 3 provides a mechanism, based on context-sensitive code snippets, for retrieving components and showing how the retrieved components can be instantiated and reused. The approach utilizes semantic modeling and ontology formalisms to conceptualize and reverse engineer the hidden knowledge in library codes.

Ontologies are, indeed, a standard for representing and sharing knowledge. Chapter 4 presents a comprehensive methodology for ontology integration and reuse based on various matching techniques. The approach is supported by an ad hoc software framework whose scope is easing the creation of new ontologies by promoting the reuse of existing ones and automating, as much as possible, the entire ontology construction procedure.

Given ever-increasing data storage requirements, many distinct classifiers have been proposed for different data types. However, the efficient integration of these classifiers remains a challenging research topic. In Chapter 5, a novel scalable framework is proposed for a classifier ensemble using a set of generated adjudications based on training and validation results. These adjudicators are ranked and assembled into a hierarchically structured decision model.

Chapter 6 presents a graph-based solution for the integration of different data stores using a homogeneous representation. Schemas are transformed and merged over property graphs, providing a modular framework.

In Chapter 7, heterogeneous terminologies are integrated into a category-theoretic model of faceted browsing. Faceted browsing systems are commonly found in online search engines and digital libraries. Controlled vocabularies or terminologies are often externally curated and are available as a reusable resource across systems; it is shown that such existing terminologies and vocabularies can be reused as facets in a cohesive, interactive system.

The compositional reuse of software libraries is important for productivity. However, the duplication and modification of software specifications, for their adaptation, leads to poor maintainability and technical debt. Chapter 8 proposes a system that solves these problems and enables the compositional reuse of software specifications independent of the choice of specification languages and tools.

As the complexity of the world and of human interaction grows, contracts are necessarily becoming more complex. A promising approach for addressing this ever-increasing complexity is the use of a language having a precise semantics, thus providing a formal basis for integrating precise methods into contracts. Chapter 9 outlines a formal language for defining general contracts, which may depend on temporally based conditions.

Chapter 10 outlines the design of a system modeling language intended to provide a framework for prototyping complex distributed system protocols. It further highlights how the language's primitives are well matched with concerns that naturally arise during distributed system design. An operational semantics for the designed language, as well as results on a variety of state-space exploration techniques, is presented.

Chapter 11 addresses the integration of formalisms within behavior-driven development tooling, which is built upon semi-formal mediums for specifying the behavior of a system as it would be externally observed. It presents a new strategy for test case generation.

Chapter 12 focuses upon model checking Z specifications. A Z specification is preprocessed, whereby a generic constant is redefined as an equivalent axiom and a schema calculus is expanded into a new schema definition.

Chapter 13 presents a parameterized logic to express nominal and erroneous behaviors, including fault modeling, given an algebra and a set of operational modes. The created logic is intended to assist analysts in considering all possible situations that may arise in complex expressions having order-related operators, enabling them to avoid missing subtle (but relevant) combinations.

January 2017

Stuart H. Rubin
Thouraya Bouabana-Tebibel

http://www.springer.com/978-3-319-56156-1