Representing Symbolic Reasoning

Brian Mastenbrook and Eric Berkowitz
1400 N. Roosevelt Blvd.
Schaumburg, IL 60173
chandler@acm.roosevelt.edu
eric@cs.roosevelt.edu

Abstract

Introspection is a fundamental component of how we as humans reason, learn, and adapt. However, many existing computer reasoning systems exclude the possibility of introspection because their design does not allow a representation of their own reasoning procedures as data. Using a model of reasoning based on observable effect, it is possible to test the ability of any given data structure to represent reasoning. Through such a model we present a minimal data structure necessary to record a computable reasoning process and define the operations that can be performed on this representation to facilitate computer reasoning. This model facilitates the introduction and development of basic operations which perform reasoning tasks using data recorded in this format. This formal description of the structures and operations necessary to facilitate reasoning on, and application of, stored reasoning procedures provides a framework through which provable assertions about the nature and limits of symbolic reasoning can be made.

Introduction

The process of introspection is a fundamental component of human reasoning. Through the observation of our own reasoning process we are able to determine which processes eventually lead to desired goals and which processes do not produce results. Without this ability to reason about our own reasoning processes and experiences, faulty reasoning processes would remain undiscovered or unmodified. Our introspective abilities also provide the basis for our ability to communicate about our reasoning: without this ability, our reasoning process would seem to us to be opaque and therefore incommunicable.

Introspective reasoning for AI is a process which has been defined in many ways. In a logical formal system, introspection is meta-reasoning: the process of forming assertions about the behavior of one's own reasoning process (McCarthy 1995). In traditional Case-Based Reasoning (CBR), introspection has been modeled by the addition of a separate reasoning engine which contains a model of the ideal performance of the domain reasoning system and which can modify the domain reasoning system's behavior to optimize its performance (Fox 1995). Thus, these systems can be seen as being composed of two systems: one whose domain is the original target domain of the system, and the other whose domain is the reasoning procedures which operate on the target domain (Figure 1).

Figure 1: Introspective systems are composed of two components: a domain reasoner and an introspective reasoner.

In introspective systems which are defined in this fashion, the introspective portion of the reasoning system can only operate on a specific domain: the domain portion of the system (Leake 1995; David B. Leake & Wilson 1995). In order to construct a system which can introspect about its own introspective reasoning behavior, a third level of CBR would need to be added (Figure 2). This third reasoning engine operates on reasoning procedures which operate on reasoning procedures which operate on the target domain. The two introspective components of this system are quite similar in purpose and operation; however, because traditional CBR contains a model of the domain as an integral component of the reasoning engine, each layer must be distinct.

Figure 2: A hypothetical system which introspects about its own introspection, composed of a domain reasoner and two layered introspective reasoners.
In a system which does not use domain knowledge as part of its implemented reasoning procedure (Berkowitz 2001), this need for the separation of the introspective components of the system from the domain portion of the system is potentially removable. In order for this to occur, the organization of the data and the construction of the domain-independent reasoning procedures must support the process of reasoning about the system's own stored reasoning procedure.

We present here a formally specified data structure which is capable of recording reasoning procedures complex enough to support introspective reasoning. This formal organization of data is designed to support the development of algorithms which are capable of reasoning on recorded reasoning data. We then show how this data structure can be used by algorithms to perform reasoning procedures which model basic introspective reasoning tasks.

The Basis of Introspective Reasoning

Defining reasoning for the purpose of understanding it is a difficult task. For the human reasoner, the actual methods by which we reason are opaque, making it difficult to define reasoning in a way that our behavior can be immediately replicated in a computable system. While the process of reasoning may be opaque, its effects and interactions are clearly observable. Reasoning systems take as input a sequence of events, each of which may represent some data to be reasoned about. What is produced depends on the nature of the system to some degree; however, as a general statement, any reasoning system whose goal is to interact with or model portions of the external world produces predictions as output. For a system whose goal is to decide its own future actions, the predictions represent chosen actions which are then implemented.

Figure 3: A basic reasoning system. Events event_0, event_1, event_2, ... enter the reasoning engine, which emits prediction_0, prediction_1, prediction_2, ... Predictions range from chosen actions to pure predictions of future events.

One possible form of computer introspection is for a reasoning system to process its own algorithms and explicitly modify them. These algorithms, once input, are treated as data just like any set of external data input to the system. This system must then understand that modifying this collection of data will result in modifications to itself. McCarthy categorized this kind of introspection as "easy introspection" and explained how this kind of introspection cannot accomplish certain commonsense tasks (McCarthy 1995). McCarthy proposes that introspection can be introduced into problem solving in a rather simple way: letting actions depend on the state of the mind and not just on the state of the external world as revealed by observation. This kind of "serious" introspection introduces into a reasoning engine the ability to reason about its own state information, such as reasoning about the contents of its own body of knowledge, by directly querying itself for this information. This kind of introspection allows the formulation of statements such as "I do not know X", where X is some piece of knowledge, and other general statements about one's own knowledge (McCarthy 1984).

While obtaining this kind of state information is important, we do not feel it encompasses all introspective behavior. As an example, consider the behavior of a person who is predicting their own potential behavior in a situation. This person is drawing on a large body of remembered introspective information to form this prediction and using this information to model their own reasoning process. However, they are not directly querying their own cognitive process for a piece of information, nor are they using an understanding of their own algorithms to carry out this process. What is remembered is enough to consciously reconstruct our own behavior, despite potential differences between past remembered situations and the hypothetical situation.
Thus, we must remember our own reasoning process in a way that can facilitate this prediction. Without an actual understanding of the algorithms which implement reasoning in a given system, all that can be known is the intentions and effects of that reasoning system: the thoughts and actions taken by the system in the past. Our model of reasoning (Figure 3) suggests that this data would consist of the sequence of events event_n merged with the set of predictions prediction_n in temporal order. However, what is not present in this sequence is the considerations taken by the reasoning engine in forming the sequence of predictions. Thus, the reasoning engine must also produce a journal of events grounded only in the operation of the reasoning system itself, not the mechanics of the domain it operates in. A process which replays a recorded combined sequence of external and internal events by re-running the events recorded in this journal should obtain the same set of predictions originally created by the system. If this journal of reasoning processes were produced by a domain-specific reasoning engine (one that used domain knowledge not obtained by interaction with the external world or from remembered prior events), then replaying this journal accomplishes the same actions without any domain-specific reasoning, provided the situations are identical. An algorithm which modifies prior recorded sequences to accommodate differences between the recorded environment and the current environment would then be accomplishing a reasoning task by direct application of memory, without any domain knowledge. Though the events used and predictions generated all fall within a specific domain, the algorithm would be operating in a domain-blind fashion. The domain independence of an algorithm which operates according to these general principles allows this algorithm the ability to reason about its own behavior, no matter which domain this behavior was obtained from, thus avoiding the infinite staircase of separate introspective systems which occurs when general introspection is added to a domain-specific reasoning system.
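To make this replay property concrete, the following minimal sketch (ours, not the authors' implementation; the names record and replay are hypothetical) treats a journal as a time-ordered list of (kind, datum) entries in the sense of Figure 3, and re-emits the recorded predictions only while the incoming events match the recorded ones:

    # A minimal sketch: a journal is assumed to be a time-ordered list of
    # ("event", datum) and ("prediction", datum) entries, as in Figure 3.

    def record(journal, kind, datum):
        """Append one observed event or emitted prediction to the journal."""
        journal.append((kind, datum))

    def replay(journal, events):
        """Re-run a recorded journal against a new sequence of events.

        If the new events match the recorded ones, the recorded predictions
        are re-emitted unchanged; otherwise replay stops at the first
        mismatch, signalling that adaptation is required.
        """
        events = iter(events)
        predictions = []
        for kind, datum in journal:
            if kind == "event" and next(events, None) != datum:
                return predictions, False   # situations are not identical
            if kind == "prediction":
                predictions.append(datum)   # re-applied without domain knowledge
        return predictions, True

    # Usage: an identical situation replays to the identical predictions.
    j = []
    record(j, "event", "door-locked")
    record(j, "prediction", "use-key")
    print(replay(j, ["door-locked"]))   # (['use-key'], True)
    print(replay(j, ["door-open"]))     # ([], False)

A mismatched event marks exactly the point at which the adaptation strategies discussed below would take over.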

Representing Reasoning

The development of a general formalism for reasoning relies strongly on the actual representation of the reasoning data (Griffiths & Bridge 1995). Using the affective model of reasoning defined above (Figure 3), one simple model that could be constructed is an ordered list composed of each event, followed by any journaled reasoning events reason_n, including any predictions created by the engine as part of its process. The creation of a basic concrete data type to encapsulate these events involves the development of a data organization capable of representing these events. The most basic data type in any symbolic reasoning system is the symbol. Thus, the most basic data type available to us to represent a reasoning process is an ordered set (or list) of symbols.

Because the purpose of the construction of this data type is to support introspective reasoning, it must be determined whether basic introspective tasks can be supported using this data type. In order to introspect about or remember a prior recorded sequence, there must be some way of differentiating that remembered sequence from actual events which are now occurring. This differentiation is a fundamental component of introspective reasoning and is, as such, a syntactic property of any concrete representation of reasoning. This type of differentiation, or quoting, can be introduced into this data format by means of tag symbols which differentiate quoted data, or by positional specifiers which would only allow quoted data at certain positions in the list.

Reasoning behavior in complex domains also depends on the ability to select portions of observed data for use in the reasoning process and ignore other portions. In a list data structure, the simplest form of selection is the creation of a sublist: a list which contains two symbols from a given list and every symbol between them. However, it is not possible to combine these two basic reasoning tasks in this data format, as taking a sublist can remove the quoting present and cause symbols which would otherwise be treated as remembered data to become active symbols in the stream. When developing a concrete data structure to support reasoning, no single basic operation of reasoning should cause such an unintended effect on the syntactic meaning of the data.

To rectify the situation, we could introduce a quoting level for each symbol, which would serve as an indication of how many times this sequence has been quoted. When quoting a sequence, the quoting level of each symbol in the sequence would be increased by the current level of the symbols where the quoting takes place, plus one. As an example of how this might work:

    a_0 b_0 c_0 begin-quote_0 a_1 end-quote_0

However, each element of the list is now non-atomic, as it is composed of a symbol and an integer. Furthermore, integers are a non-symbolic data type. The ordered pairs in this data representation can instead be interpreted as single-parameter functions named by the first element of the ordered pair. In this interpretation, the predictions generated by a reasoning engine would be executed on some lower level of functions which govern the system's interaction with its domain.
These executed functions would produce responses, which would in turn be input to the system as new events. In fact, this interpretation allows the data format to encompass more complex data formats in a straightforward manner. A function which constructs a complex data type out of some number of symbols can be recreated as a function which curries its parameters and thereby also represents the actual structure. As a simple example, the function three-tuple could turn its argument into a function expecting three parameters, and the functions first, second, and third could retrieve these from an initialized 3-tuple. Other complex data formats such as graphs could be constructed through smaller functions creating vertices and edges.

Through this method, unambiguous quoting of other lists can also be accomplished. In order to quote an arbitrarily-sized data structure such as a list, the function which represents the list must curry infinitely, always accepting new data parameters. The list ((a b) (c d)) can then be quoted via these constructions in a manner similar to the following:

    (list s) (pair p1) (p1 a) (p1 b) (pair p2) (p2 c) (p2 d) (s p1) (s p2)

This method of quoting resolves the ambiguity introduced by the use of place values or single symbols to differentiate quoted elements in a list of symbols. Taking a sublist of this quoted list, (p1 a) (p1 b), does remove the meaning associated with the function p1, due to the removal of the definition of p1; however, the elements a and b are now designated as being data elements or function parameters. Unlike the methods of quoting introduced for lists of symbols, here sublists of quoted elements can no longer be confused with active elements in the system.
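As an illustration of this curried encoding, the sketch below (our construction; the fresh-name generation for s, p1, p2, ... is an assumption, since the text does not specify how the pair symbols are chosen) produces the pair stream for the list ((a b) (c d)):

    # A sketch of the curried-pair quoting scheme; the gensym-style naming
    # of p1, p2, ... is our assumption.
    import itertools

    _ids = itertools.count(1)

    def quote_list(sublists):
        """Encode a list of sublists, e.g. ((a b) (c d)), as ordered pairs."""
        stream = [("list", "s")]             # declare s as the list-function
        members = []
        for sub in sublists:
            p = f"p{next(_ids)}"             # a fresh pair-function, e.g. p1
            stream.append(("pair", p))
            for element in sub:
                stream.append((p, element))  # curry the elements into p
            members.append(p)
        stream.extend(("s", p) for p in members)  # curry the pairs into s
        return stream

    print(quote_list([["a", "b"], ["c", "d"]]))
    # [('list', 's'), ('pair', 'p1'), ('p1', 'a'), ('p1', 'b'),
    #  ('pair', 'p2'), ('p2', 'c'), ('p2', 'd'), ('s', 'p1'), ('s', 'p2')]

    # A sublist such as [('p1', 'a'), ('p1', 'b')] no longer carries the
    # ('pair', 'p1') definition, so a and b remain inert data parameters
    # and cannot be mistaken for active symbols in the stream.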

Performing Basic Reasoning

Basic reasoning depends heavily on the ability to form a prediction of an internal or external response to a sequence of events. These predictions, when composed of potential internal responses, are the most basic form of intention. Some intentions are consciously reflected upon before being acted upon; others are simply implemented and potentially remembered for later use. We will show here how intentions of the second type can be modeled, and, by using the introspective capabilities of this model, how intentions of the first type can be modeled as well.

Our model of basic reasoning suggests that the reasoning process itself can be treated as data. What portions of this process should be recorded and what portions omitted? In order to serve as a useful model, what is recorded should be enough to re-apply the given reasoning process if the factors considered as input to the process are the same. In other words, if the sequence of events considered by the reasoning process during the formation of its prediction is the same, then the prediction itself can be re-used in the given situation. Factors which might alter the decision-making process should be explicitly included in the data.

Through re-application of recorded reasoning processes, a system can attempt to model the behavior shown in those reasoning processes. If all the factors which influenced the behavior are available, then the behavior of the system itself will be identical. When the factors available are not identical to any prior recorded sequence, one of two strategies can be pursued. The system may attempt to form an incomplete map between the current situation and some prior situation, or the system may attempt to use higher-order reasoning processes to create a new strategy. If this system can use its basic reasoning procedures to model its own behavior, then through the process of reproducing its own behavior it can model a higher-order process which would attempt to modify that behavior.

How can a system use the data recorded by itself to model its own behavior? This data is a representation, in structured symbolic form, of the behavior of an algorithmic process. To recreate this complex behavior it must first be understood how to recreate the behavior of a simple algorithmic system. Consider a basic test process as recorded. The process has four components: the condition, the act of testing the condition, the result of testing the condition, and some action implemented as a consequence of this result. In the most basic case, this can be modeled using four ordered pairs of symbols:

    (given cond) (test cond) (result yes) (act cond)

Figure 4: A basic process.

Each of the four pairs represents a separate step of the process. The pair (given cond) represents the input of data to the system. The pair (test cond) represents a call to a low-level procedure, represented as an opaque functional application. The pair (result yes) represents the result returned from the low-level procedure as a boolean true or affirmative result. The pair (act cond) represents some action carried out as a consequence of this result. Creating a correspondence between a new given set of conditions and this list allows us to predict the remainder of the list (Figure 5: Predicting the rest of the process). Thus, the next step in the process is to implement the resulting prediction by carrying out the test (Figure 6: Carrying out the process). Because the result of this low-level test matched the result in the given list, the action resulting from this process can likewise be implemented. However, what if the result was not the same (Figure 7: A result which does not match)? It may be possible to find another prior experience which more closely matches the current condition; however, if such a list is not available, some form of adaptation is required. One simple form of adaptation is to ignore the differing result and carry out the action specified (Figure 8: Ignoring the unmatched result).
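A sketch of this predict-and-adapt cycle, under the four-pair reconstruction above; the function name reapply and the toy door/window domain are hypothetical illustrations, not the authors' algorithm:

    # A sketch of re-applying a recorded four-pair process (Figures 4-8).

    def reapply(case, given, tests, act):
        """Replay a recorded case against a new given datum.

        case  -- recorded list such as [("given", c), ("test", c),
                 ("result", "yes"), ("act", c)]
        given -- the new input datum standing in for the recorded one
        tests -- mapping from test names to callables returning "yes"/"no"
        act   -- callable invoked when the recorded action is carried out
        """
        old = case[0][1]             # the datum of the recorded (given ...)
        # Predict the remainder of the list by mapping old onto new.
        predicted = [(f, given if d == old else d) for f, d in case]
        for i, (f, datum) in enumerate(predicted):
            if f in tests:
                outcome = tests[f](datum)   # carry out the low-level test
                if outcome != predicted[i + 1][1]:
                    # The result does not match (Figure 7); the simplest
                    # adaptation is to ignore it and continue (Figure 8).
                    pass
            elif f == "act":
                act(datum)                  # carry out the recorded action
        return predicted

    case = [("given", "door"), ("test", "door"),
            ("result", "yes"), ("act", "door")]
    reapply(case, "window", {"test": lambda d: "no"},
            act=lambda d: print("acting on", d))   # acting on window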
Recreating the behavior of complex reasoning procedures can then be accomplished via two tasks. The amount of data available to a reasoning system which records its own actions for further use is overwhelmingly large. Thus, some process must select the data that is to be considered for the reasoning process. The second portion of this process of modeling reasoning is the formation of a mapping between the chosen data and the recent history of events. These processes are complex; however, through the development of a formal data model we hope to enable the development of algorithms which accomplish these tasks, as well as provable bounds on their operation. A simple form of mapping can be used to recreate the behavior of complex algorithmic procedures.

One such algorithm is the Euclidean algorithm for finding the greatest common divisor (GCD) of two numbers. This algorithm operates on the positive integers and uses two tests in the determination of its answer. Thus, for every pair of input numbers these tests must be represented in the symbolic format. In order to serve as an effective model for actually implementing the recreation of the algorithm, the data should also be capable of being interpreted as a series of functional applications which create the data used in future steps of the algorithm. The Euclidean algorithm can be expressed in simple pseudocode as the following:

    gcd(m, n) :=
        if (m < n) swap(n, m)
        if (m MOD n == 0) n
        else gcd(m MOD n, n)

To construct a symbolic representation of the actions of this algorithm on a given case, we will walk through the behavior of this algorithm in that case. Here, the numbers chosen will be 9 and 15. The symbol p here will represent an ordered pair containing these two numbers. Using the basic model of a process as given above, we can represent the first part of this algorithm's behavior on the given numbers:

    (gcd p) (less-than p) (result yes) (swap p)

Because 9 is less than 15, the algorithm swaps these numbers. The second step is to test the mod of the two numbers to see if it is zero:

    (mod p) (result 6) (zero 6) (result no)

Here, 6 is merely a symbol representing the number six. Because the result is negative, we must find the GCD of m MOD n and n. To do this, we must construct an ordered pair of these two numbers. This will be done by using a currying function as defined earlier.

    (make-pair q) (second-value p) (result 9) (q 9) (q 6) (gcd q)

Now, the process begins over again with 9 and 6:

    (less-than q) (result no)

Because the values to the recursive calls of the algorithm are input in the proper order, we do not have to swap the two numbers.

    (mod q) (result 3) (zero 3) (result no)
    (make-pair r) (second-value q) (result 6) (r 6) (r 3) (gcd r)

We have one more recursive call to complete.

    (less-than r) (result no) (mod r) (result 0) (zero 0) (result yes)
    (gcd-answer 3)

Because 6 MOD 3 is 0, we have found the GCD of the two numbers. The complete representation for this process is now as follows:

    (gcd p) (less-than p) (result yes) (swap p) (mod p) (result 6)
    (zero 6) (result no) (make-pair q) (second-value p) (result 9)
    (q 9) (q 6) (gcd q) (less-than q) (result no) (mod q) (result 3)
    (zero 3) (result no) (make-pair r) (second-value q) (result 6)
    (r 6) (r 3) (gcd r) (less-than r) (result no) (mod r) (result 0)
    (zero 0) (result yes) (gcd-answer 3)

This representation of the algorithmic process includes the responses chosen by the algorithm for each different possible response from each of the tests made by the algorithm. Because of that property, this case serves as a template for future modeling of the behavior of the algorithm.
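The trace above can be produced mechanically. The sketch below (ours; the instrumentation scheme and the fixed pool of pair names p, q, r, ... are assumptions) instruments the Euclidean algorithm so that running it on 9 and 15 emits exactly the pairs listed:

    # A sketch that generates the symbolic trace by instrumenting the
    # Euclidean algorithm; integers stand in for the number symbols.
    import itertools

    _names = itertools.count(0)
    PAIR_SYMBOLS = "pqrstuvwxyz"

    def traced_gcd(m, n, trace=None, sym=None):
        trace = [] if trace is None else trace
        if sym is None:                       # top-level call introduces p
            sym = PAIR_SYMBOLS[next(_names)]
            trace.append(("gcd", sym))
        trace.append(("less-than", sym))
        if m < n:
            trace += [("result", "yes"), ("swap", sym)]
            m, n = n, m
        else:
            trace.append(("result", "no"))
        rem = m % n
        trace += [("mod", sym), ("result", rem), ("zero", rem)]
        if rem == 0:
            trace += [("result", "yes"), ("gcd-answer", n)]
            return n, trace
        trace.append(("result", "no"))
        new = PAIR_SYMBOLS[next(_names)]      # curry a fresh pair, e.g. q
        trace += [("make-pair", new), ("second-value", sym), ("result", n),
                  (new, n), (new, rem), ("gcd", new)]
        # Recurse with the values already in the proper order: no swap needed.
        return traced_gcd(n, rem, trace, new)

    answer, trace = traced_gcd(9, 15)
    print(answer)      # 3
    print(trace[:6])   # [('gcd', 'p'), ('less-than', 'p'), ('result', 'yes'),
                       #  ('swap', 'p'), ('mod', 'p'), ('result', 6)]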
Higher-Order Introspective Reasoning

Introspective reasoning is the act of reasoning with self-knowledge. Self-knowledge is a broad concept and includes many factors, such as the direct-query introspection of McCarthy which allows a system to directly obtain knowledge about its cognitive processes and layers via queries such as "do I know X?". However, there is a more indirect form of self-knowledge which allows us to pose questions of the form "what would I do given X?". This form of self-knowledge is not accomplished via an actual understanding of the algorithms which produce the behavior, but is instead based on some conscious recreation of the process. There are two ways in which this introspective process can be modeled.

The first way a system could model its own behavior consciously is to quote data recorded from prior reasoning processes and map it to other quoted data describing a hypothetical situation. Thus, quoting the procedure given in Figure 4 would produce the following:

    (list s) (pair p1) (p1 given) (p1 cond) (pair p2) (p2 test) (p2 cond)
    (pair p3) (p3 result) (p3 yes) (pair p4) (p4 act) (p4 cond)

This method allows the system to form a potential answer to the "what would I do given X" question. However, it does not allow the system to change that answer on a dynamic basis. Furthermore, while this process uses the map portion of the reasoning process, it does not allow the modeling of the portion which chooses a case.
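The quoting step of this first strategy can be sketched with the curried-pair encoding from earlier. The names quote_case and map_hypothetical are our illustrative inventions, and the trailing (s p1) ... (s p4) applications are an assumption carried over from the earlier list example:

    # A sketch of the first modeling strategy: quote a recorded procedure so
    # it becomes inert data, then map it onto a hypothetical situation.

    def quote_case(case):
        """Quote a list of (function, datum) pairs as curried pair-functions."""
        stream = [("list", "s")]
        for i, (f, datum) in enumerate(case, start=1):
            stream += [("pair", f"p{i}"), (f"p{i}", f), (f"p{i}", datum)]
        # Feed the quoted pairs into s, as in the earlier list example.
        stream += [("s", f"p{i}") for i in range(1, len(case) + 1)]
        return stream

    def map_hypothetical(quoted, old, new):
        """Answer "what would I do given new?" by substituting old -> new."""
        return [(a, new if b == old else b) for a, b in quoted]

    case = [("given", "cond"), ("test", "cond"),
            ("result", "yes"), ("act", "cond")]
    print(map_hypothetical(quote_case(case), "cond", "new-cond"))

Because the quoted stream is inert data, the substitution cannot accidentally trigger the recorded action; it only describes what the action would be.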

The other way a system could model its own behavior would be to actually record the processes used to select events and map them. This form of introspection allows the system to model not just the results of forming a mapping but the methods by which it is accomplished, without having an actual description of the algorithm used. Furthermore, by modifying the data used in this process the system could modify its own behavior. If such a reasoning system could not just predict its own actions but modify them, then it could demonstrate a capacity for expansion. This expansion would be a true introspective expansion, marked by the capability to model and modify the processes by which one reasons. More than simply reasoning about our own knowledge (McCarthy 1990), this kind of introspection allows the system to control its own behavior consciously by simulating its behavior and making appropriate modifications. We believe that future systems which include this ability can demonstrate this type of expansion and growth.

Conclusion

The process of developing algorithms to support symbolic and introspective reasoning requires a great deal of thought about the nature of these domains. By specifying a data type and an approach to these methods, it is our hope that the development of these algorithms can be enabled. Furthermore, this minimal approach allows for the formulation of provable statements on the bounds of these systems. Introspective reasoning requires a domain-independent approach to reasoning technique. The model presented here demonstrates one method of achieving this domain independence in a way which is designed to allow the development of future algorithms. Through clarifying the nature of introspective reasoning, we hope that future introspective systems can be developed which allow for complex self-knowledge and learning.

References

Berkowitz, E. G. 2001. Body based reasoning using a feeling-based lexicon, mental imagery and an object-oriented metaphor hierarchy. In Proceedings of the 14th Biennial Conference of the Canadian Society for Computational Studies of Intelligence, AI 2001, 47-56.

Fox, S. 1995. Introspective Learning for Case-Based Planning. Ph.D. Dissertation, Indiana University.

Griffiths, A. D., and Bridge, D. G. 1995. Formalising the knowledge content of case memory systems. In UK Workshop on Case-Based Reasoning, 32-41.

Leake, D. B. 1995. Experience, introspection, and expertise: Learning to refine the case-based reasoning process. Journal of Experimental and Theoretical Artificial Intelligence.

Leake, D. B., Kinley, A., and Wilson, D. 1995. Learning to improve case adaptation by introspective reasoning and CBR. In Proceedings of the First International Conference on Case-Based Reasoning.

McCarthy, J. 1984. Some expert systems need common sense. In Proc. of a Symposium on Computer Culture: The Scientific, Intellectual, and Social Impact of the Computer, 129-137. New York Academy of Sciences.

McCarthy, J. 1990. Ascribing mental qualities to machines. Ablex Publishing Corporation.

McCarthy, J. 1995. Making robots conscious of their mental states. In Working Notes of the AAAI Spring Symposium on Representing Mental States and Mechanisms.