AI for Service Composition


In recent years, there has been increasing interest in service composition. The key idea is that existing distributed services can be selected and combined into suitable workflows of tasks, in order to provide new functionalities or applications. Service composition has the potential to revolutionize the classical approaches to data integration and business process integration, reducing development time and effort. Standards and platforms based on service models and supporting service composition have been developed in different frameworks, including web services, grid services, and agent services. AI techniques have been used to support key aspects of the management of service compositions, including tasks such as their generation, allocation of resources, execution, monitoring and repair. For instance, knowledge representation techniques have been exploited to provide suitable semantic annotations of services; planning has been applied to the automatic generation of the workflows composing the services; scheduling has been applied to resource allocation and workflow optimization; and agent techniques have been applied to support dynamic adaptation of the workflows. However, many issues remain to be resolved. These include (1) forming precise, clean and general characterizations of service compositions, and identifying the most appropriate ways to formalize the critical steps in their life cycle; (2) determining suitable languages to represent service compositions in all their relevant aspects, and finding ways of bridging the gap between the service composition languages used in industry and the languages exploited in AI; (3) highlighting important challenges for AI to be effective in practical, industrial contexts, proposing techniques and tools able to address these challenges in realistic scenarios, and finding architectures for integrating such techniques in a robust, integrated environment.
The eight full papers and four short papers appearing in these proceedings address these and other relevant problems in service composition. This workshop has been organized under the MIUR-FIRB project RBNE0195K5 "Knowledge Level Automated Software Engineering". It is the continuation of three successful workshops at ICAPS 2003, ICAPS 2004, and AAAI 2005, and aims at becoming a regular meeting place for researchers and practitioners working in the field of AI and in the area of service composition.

Marco Pistore, Jose Luis Ambite, Jim Blythe, Jana Koehler, Sheila McIlraith, Biplav Srivastava
August 28, 2006

Organization

Program Chair:
Marco Pistore, University of Trento, Italy

Organizing Committee:
Jose Luis Ambite, USC Information Sciences Institute, USA
Jim Blythe, USC Information Sciences Institute, USA
Jana Koehler, IBM Research Laboratory, Switzerland
Sheila McIlraith, University of Toronto, Canada
Biplav Srivastava, IBM Research Laboratory, India

Local Organization:
Annapaola Marconi, ITC-irst, Trento, Italy

Table of Contents

A Logic For Decidable Reasoning About Services (Y. Gu, M. Soutchanski)
A Service Selection Model to Improve Composition Reliability (N. Kokash)
Abduction for Specifying and Verifying Web Service and Choreographies (F. Chesani, P. Mello, M. Montali, M. Alberti, M. Gavanelli, E. Lamma, S. Storari)
An Immune System-Inspired Approach for Composite Web Services Reuse (R. Bova, S. Hassas, S. Benbernou)
Designing Security Requirements Models through Planning (V. Bryl, F. Massacci, J. Mylopoulos, N. Zannone)
Formal Development of Web Services (A. Chirichiello, G. Salaun)
Implicit vs. Explicit Data-flow Requirements in Web Service Composition Goals (A. Marconi, M. Pistore, P. Traverso)
Web Service Composition in a Temporal Action Logic (L. Giordano, A. Martelli)
A Composition-oriented Proposal to Describe Functionalities of Devices (S. Tandabany, M. Rousset)
Automatic Web Service Composition: Service-tailored vs. Client-tailored Approaches (D. Berardi, G. De Giacomo, M. Mecella, D. Calvanese)
Causal Link Matrix and AI Planning: a Model for Web Service Composition (F. Lecue, A. Leger)
Using Quantified Boolean Logics to Verify Web Service Composition Requirements (E. Giunchiglia, M. Narizzano, M. Pistore, M. Roveri, P. Traverso)


A Logic For Decidable Reasoning About Services

Yilan Gu¹ and Mikhail Soutchanski²

Abstract. We consider a modified version of the situation calculus built using a two-variable fragment of first-order logic extended with counting quantifiers. We mention several additional groups of axioms that need to be introduced to capture taxonomic reasoning. We show that the regression operator in this framework can be defined similarly to regression in Reiter's version of the situation calculus. Using this new regression operator, we show that the projection problem (the main reasoning task in the situation calculus) is decidable in the modified version. We mention possible applications of this result to the formalization of Semantic Web services.

1 Introduction

The Semantic Web community makes significant efforts toward the integration of Semantic Web technology with the ongoing work on web services. These efforts include the use of semantics in the discovery, composition, and other aspects of web services. Web service composition is related to the task of designing a suitable combination of available component services into a composite service to satisfy a client request when there is no single service that can satisfy this request [16]. This problem has attracted significant attention of researchers both in academia and in industry. A major step in this direction is the creation of ontologies for web services, in particular OWL-S, which models web services as atomic or complex actions with preconditions and effects. An emerging industry standard, BPEL4WS (Business Process Execution Language for Web Services), provides the basis for manually specifying composite web services using a procedural language. However, in comparison to error-prone manual service composition, (semi)automated service composition promises significant flexibility in dealing with available services and also accommodates naturally the dynamics and openness of service-oriented architectures.
The problem of the automated composition of web services is often formulated in terms similar to a planning problem in AI: given a description of a client goal and a set of component services (that can be atomic or complex), find a composition of services that achieves the goal [20, 21, 26, 24]. Although several approaches to solving this problem have already been proposed, many issues remain to be resolved, e.g., how to give well-defined and general characterizations of service compositions, how to compute all effects and side-effects on the world of every action included in a composite service, and others. Other reasoning problems, well known in AI, that are relevant to service composition and discovery are the executability and projection problems. The executability problem requires determining whether the preconditions of all actions included in a composite service can be satisfied given incomplete information about the world. The projection problem requires determining whether a certain goal condition is satisfied after the execution of all component services, again given incomplete information about the current state. In this paper we concentrate on the latter problem because it is an important prerequisite for planning and execution monitoring tasks, and for simplicity we start with sequential compositions of atomic actions (services) only (we mention complex actions in the last section). More specifically, following several previous approaches [20, 21, 5, 26, 16], we choose the situation calculus as an expressive formal language for the specification of actions. However, we acknowledge the openness of the world and represent incomplete information about an initial state of the world by assuming that it is characterized by a predicate logic theory in general syntactic form.

¹ Dept. of Computer Science, University of Toronto, Canada. ² Department of Computer Science, Ryerson University, Canada.
The situation calculus is a popular and well understood predicate logic language for reasoning about actions and their effects [25]. It serves as a foundation for the Process Specification Language (PSL), which axiomatizes a set of primitives adequate for describing the fundamental concepts of manufacturing processes (PSL has been accepted as an international standard) [13, 12]. It is used to provide a well-defined semantics for web services and a foundation for the high-level programming language Golog [5, 20, 21]. However, because the situation calculus is formulated in general predicate logic, reasoning about the effects of sequences of actions is undecidable (unless some restrictions are imposed on the theory that axiomatizes the initial state of the world). The first motivation for our paper is the intention to overcome this difficulty. We propose to use the two-variable fragment FO² of first-order logic (FOL) as a foundation for a modified situation calculus. Because the satisfiability problem in this fragment is known to be decidable (it is in NEXPTIME), we demonstrate that by reducing reasoning about the effects of actions to reasoning in this fragment, one can guarantee decidability no matter what the syntactic form of the theory representing the initial state of the world is. The second motivation for our paper comes from description logics. Description Logics (DLs) [2] are a well-known family of knowledge representation formalisms which play an important role in providing the formal foundations of several widely used Web ontology languages, including OWL [15], in the area of the Semantic Web [3]. DLs may be viewed as syntactic fragments of FOL and offer considerable expressive power going far beyond propositional logic, while ensuring that reasoning is decidable [6]. DLs have mostly been used to describe static knowledge-base systems. However, several research groups have considered the formalization of actions using DLs or extensions of DLs.
Following the key idea of [8], that reasoning about complex actions can be carried out in a fragment of the propositional situation calculus, De Giacomo et al. [9] give an epistemic extension of DLs to provide a framework for the representation of dynamic systems. However, the representation of and reasoning about actions in this framework are strictly propositional, which reduces the representational power of the framework. In [4], Baader et al. provide another proposal for integrating description logics and action formalisms. They take as foundation the well-known description logic ALCQIO (and its sub-languages) and show that the complexity of the executability and projection problems coincides with the complexity of standard DL reasoning. However, actions (services) are represented in their paper meta-theoretically, not as first-order (FO) terms. This can potentially lead to complications when specifications of other reasoning tasks (e.g., planning) are considered, because it

is not possible to quantify over actions in their framework. In our paper, we take a different approach and represent actions as FO terms, but achieve integration of taxonomic reasoning and reasoning about actions by restricting the syntax of the situation calculus. Our paper can be considered a direct extension of the well-known result of Borgida [6], who proves that many expressive description logics can be translated to the two-variable fragment FO² of FOL. However, to the best of our knowledge, nobody has proposed this extension before. The main contribution of our paper to the area of service composition and discovery is the following. We show that by using services that are composed from atomic services with no more than two parameters, and by using only those properties of the world which have no more than two parameters (to express a goal condition), one can guarantee that the executability and projection problems for these services can always be solved, even if information about the current state of the world is incomplete. Our paper is structured as follows. In Section 3 we briefly review Reiter's situation calculus. In Section 4 we review a few popular description logics. In Section 5 we discuss the details of our proposal: a modified situation calculus and an extension of regression (the main reasoning mechanism in the situation calculus). Finally, in Section 6 we provide a simple example, and in Section 7 we briefly discuss other related approaches to reasoning about actions.

[Figure 1: Examples of transition diagrams for simple services (enroll(x, y), drop(x, y), paytuit(x, y), reset). Figure 2: A transition diagram for the composite web service admit(psn1); paytuit(psn1, 5100); enroll(psn1, cs1), leading from a state where ~incoming(psn1), ~student(psn1) and ~∃y.enrolled(psn1, y) hold to one where incoming(psn1), student(psn1) and enrolled(psn1, cs1) hold.]

2 Motivations

Consider online web services provided by a university.
Imagine a system that automates the work of department administrators by doing student management online, for instance admitting new students, accepting payments of tuition fees, doing course enrollments for students, etc. Unlike previously proposed e-services (e.g., the e-services described in [5] or in BPEL4WS), which allow only services without parameters, we use functional symbols to represent a class of services. For example, variables, say x and y, can be used to represent any objects; the service of enrolling any student x in any course y can be specified by using a functional symbol enroll(x, y); the service of admitting any student x can be represented as a functional symbol admit(x), etc. Composite web services can be considered as sequences of instantiated services. For example, the sequence admit(P1); paytuit(P1, 5100); enroll(P1, CS1) represents the following composite web service for person P1: admit her as a student, take the tuition fee of $5100, and enroll her in the course CS1. The system properties are specified by using predicates with parameters. For example, the predicate enrolled(x, y) represents that a student x is enrolled in a course y. This property becomes true when the service enroll(x, y) is performed and becomes false when the service drop(x, y) is performed for a student x and a course y (see Figure 1). A composite web service corresponds to the composition of these instantiated transition diagrams (see Figure 2). When one describes the preconditions of the services and the effects of the services on the world, i.e., when one characterizes which properties of the world are true before and after the execution of the services, given incomplete information about the current state of the world, the use of an FO language such as the situation calculus [25] provides more expressive power than propositional languages.
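The transition diagrams can be mimicked by a small closed-world simulation. The reading of which service establishes which atom, and the set-of-ground-atoms state representation, are our own (the paper itself works with incomplete information):

```python
# Toy closed-world simulation of the transition diagrams of Figures 1 and 2:
# a state is a set of ground atoms, and each service adds or removes the
# atoms it affects.
def apply_service(state, name, *args):
    state = set(state)
    if name == "admit":            # admit(x): x becomes an (incoming) student
        x, = args
        state.add(("incoming", x))
        state.add(("student", x))
    elif name == "paytuit":        # paytuit(x, n): x has paid n dollars
        x, n = args
        state.add(("paid", x, n))
    elif name == "enroll":         # enroll(x, y): enrolled(x, y) becomes true
        x, y = args
        state.add(("enrolled", x, y))
    elif name == "drop":           # drop(x, y): enrolled(x, y) becomes false
        x, y = args
        state.discard(("enrolled", x, y))
    return state

# The composite service of Figure 2: admit(P1); paytuit(P1, 5100); enroll(P1, CS1)
state = set()
for step in [("admit", "P1"), ("paytuit", "P1", 5100), ("enroll", "P1", "CS1")]:
    state = apply_service(state, *step)
# Now incoming(P1), student(P1), paid(P1, 5100) and enrolled(P1, CS1) all hold.
```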
For example, assume that a student is considered a qualified full-time student if the tuition fee she paid is more than 5000 dollars and she is enrolled in at least four different courses. Such a property can easily be described in FO logic, and checking whether or not it holds after the execution of a certain sequence of web services is equivalent to solving a projection problem. Because FOL is a compact way of representing information about states and transitions between states, we want to take advantage of the expressive power of FO logic as much as possible to reason about web services. On the other hand, as we mentioned in the introduction, we want to avoid the undecidability of the entailment problem in general FOL. Inspired by the decidability of reasoning in many DLs (which are sub-languages of a syntactic fragment of FOL with a restriction on the number of variables), we restrict the number of variables to at most two in the specifications of the web services to ensure the decidability of the executability and projection problems. At the same time, we can take advantage of the expressive power of quantifiers to specify realistic web services (such as the one mentioned above) compactly. Moreover, FOL with a limited number of variables, in contrast to propositional logic, still allows us to represent and reason about properties with infinite domains (such as weight and time) or with large finite domains (such as money and persons) in a very compact way. Two examples are given in the last section to illustrate the expressive power and reasoning about web services.

3 The Situation Calculus

The situation calculus (SC) L_sc is an FO language for axiomatizing dynamic systems. In recent years, it has been extended to include procedures, concurrency, time, stochastic actions, etc. [25]. Nevertheless, all dialects of the SC include three disjoint sorts: actions, situations and objects.
Actions are FO terms consisting of an action function symbol and its arguments. Actions change the world. Situations are FO terms which denote possible world histories. A distinguished constant S0 denotes the initial situation, and the function do(a, s) denotes the situation that results from performing action a in situation s. Every situation corresponds uniquely to a sequence of actions. Moreover, the notation s′ ≼ s means that either situation s′ is a subsequence of situation s or s′ = s.³ Objects are FO terms other than actions and situations that depend on the domain of application. Fluents are relations or functions whose values may vary from one situation to the next. Normally, a fluent is denoted by a predicate or function symbol whose last argument has the sort situation. For example, F(x̄, do([α1, …, αn], S0)) represents a relational fluent in the situation do(αn, do(…, do(α1, S0)…)) resulting from the execution of the ground action terms α1, …, αn in S0.⁴ The SC includes the distinguished predicate Poss(a, s) to characterize actions a that are possible to execute in s. For any SC formula φ and a term s of sort situation, we say φ is a formula uniform in s iff it does not mention the predicates Poss or ≼, it does not quantify over variables of sort situation, it does not mention equality on

³ Reiter [25] uses the notation s′ ⊑ s, but we use s′ ≼ s to avoid confusion with the inclusion relation that is commonly used in the description logic literature. In this paper, we use ⊑ to denote the inclusion relation between concepts or roles.
⁴ We do not consider functional fluents in this paper.

situations, and whenever it mentions a term of sort situation in the situation argument position of a fluent, then that term is s (see [25]). If φ(s) is a uniform formula and the situation argument is clear from the context, we sometimes suppress the situation argument and write the formula simply as φ; we also introduce the notation φ[s] for the SC formula obtained by restoring the situation s to all the fluents and/or Poss predicates (if any) in φ. Obviously, φ[s] is uniform in s. A basic action theory (BAT) D in the SC is a set of axioms written in L_sc with the following five classes of axioms to model actions and their effects [25].

Action precondition axioms D_ap: For each action function A(x̄), there is an axiom of the form Poss(A(x̄), s) ≡ Π_A(x̄, s), where Π_A(x̄, s) is a formula uniform in s with free variables among x̄ and s, which characterizes the preconditions of action A.

Successor state axioms D_ss: For each relational fluent F(x̄, s), there is an axiom of the form F(x̄, do(a, s)) ≡ Φ_F(x̄, a, s), where Φ_F(x̄, a, s) is a formula uniform in s with free variables among x̄, a and s. The successor state axiom (SSA) for F(x̄) completely characterizes the value of F(x̄) in the next situation do(a, s) in terms of the current situation s.
The syntactic form of Φ_F(x̄, a, s) is as follows:

F(x̄, do(a, s)) ≡ ⋁_{i=1..m} (∃ȳ_i)(a = PosAct_i(t̄_i) ∧ φ⁺_i(x̄, ȳ_i, s)) ∨ F(x̄, s) ∧ ¬⋁_{j=1..k} (∃z̄_j)(a = NegAct_j(t̄_j) ∧ φ⁻_j(x̄, z̄_j, s)),

where for i = 1..m (j = 1..k, respectively), each t̄_i (t̄_j, respectively) is a vector of terms including variables among x̄ and the quantified new variables ȳ_i (z̄_j, respectively) if there are any; each φ⁺_i(x̄, ȳ_i, s) (φ⁻_j(x̄, z̄_j, s), respectively) is an SC formula uniform in s which has free variables among x̄ and ȳ_i (z̄_j, respectively) if there are any; and each PosAct_i(t̄_i) (NegAct_j(t̄_j), respectively) is an action term that makes F(x̄, do(a, s)) true (false, respectively) if the condition φ⁺_i(x̄, ȳ_i, s) (φ⁻_j(x̄, z̄_j, s), respectively) is satisfied.

Initial theory D_S0: A set of FO formulas whose only situation term is S0. It specifies the values of all fluents in the initial state. It also describes all the facts that are not changeable by any actions in the domain.

Unique name axioms for actions D_una: Axioms specifying that two actions are different if their names are different, and that identical actions have identical arguments.

Fundamental axioms for situations Σ: The axioms for situations which characterize the basic properties of situations. These axioms are domain independent. They are included in the axiomatization of any dynamic system in the SC (see [25] for details).

Suppose that D = D_una ∪ D_S0 ∪ D_ap ∪ D_ss ∪ Σ is a BAT, α1, …, αn is a sequence of ground action terms, and G(s) is a uniform formula with one free variable s. One of the most important reasoning tasks in the SC is the projection problem: determine whether D ⊨ G(do([α1, …, αn], S0)). Another basic reasoning task is the executability problem. Let executable(do([α1, …, αn], S0)) be an abbreviation of the formula Poss(α1, S0) ∧ ⋀_{i=2..n} Poss(αi, do([α1, …, α_{i−1}], S0)). Then the executability problem is to determine whether D ⊨ executable(do([α1, …, αn], S0)).
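The two reasoning tasks can be illustrated by direct simulation over a toy complete initial state. The paper's point is that both tasks remain decidable without this closed-world simplification; the particular precondition and successor-state updates below are our own sketch of the running example:

```python
# Toy illustration of executability and projection for a ground action
# sequence.  The precondition "enroll requires a student who has paid"
# is our own invention for the example.
def poss(action, state):
    name, *args = action
    if name == "enroll":
        x, _ = args
        return ("student", x) in state and any(
            a[0] == "paid" and a[1] == x for a in state)
    return True                      # other actions are always possible here

def do(action, state):
    name, *args = action             # toy successor-state updates
    state = set(state)
    if name == "admit":
        state.add(("student", args[0]))
    elif name == "paytuit":
        state.add(("paid", args[0], args[1]))
    elif name == "enroll":
        state.add(("enrolled", args[0], args[1]))
    elif name == "drop":
        state.discard(("enrolled", args[0], args[1]))
    return state

def executable_and_projects(actions, goal, state):
    """executable(do([a1..an], S0)) and whether goal G holds afterwards."""
    for a in actions:
        if not poss(a, state):
            return False, False      # the sequence is not executable
        state = do(a, state)
    return True, goal(state)

# Goal from Section 2: paid more than $5000 and enrolled in >= 4 courses.
def qualified(state, x="P1"):
    paid = any(a[0] == "paid" and a[1] == x and a[2] > 5000 for a in state)
    courses = {a[2] for a in state if a[0] == "enrolled" and a[1] == x}
    return paid and len(courses) >= 4

seq = [("admit", "P1"), ("paytuit", "P1", 5100)] + [
    ("enroll", "P1", c) for c in ("CS1", "CS2", "CS3", "CS4")]
ok, holds = executable_and_projects(seq, qualified, set())
```

Running enroll first from the empty initial state fails the executability check, since its precondition is unsatisfied there.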
Planning and high-level program execution are two important settings where the executability and projection problems arise naturally. Regression is a central computational mechanism that forms the basis for automated solutions to the executability and projection tasks in the SC [23, 25]. A recursive definition of the regression operator R on any regressable formula φ is given in [25]; we use the notation R[φ] to denote the formula that results from eliminating Poss atoms in favor of their definitions as given by the action precondition axioms, and replacing fluent atoms about do(α, s) by logically equivalent expressions about s as given by the SSAs. A formula W is regressable iff (1) each term of sort situation in W starts from S0 and has the syntactic form do([α1, …, αn], S0), where each αi is of sort action; (2) for every atom of the form Poss(α, σ) in W, α has the syntactic form A(t1, …, tn) for some n-ary function symbol A of L_sc; and (3) W does not quantify over situations, and does not mention the relation symbols ≼ or = between terms of situation sort. The formula G(do([α1, …, αn], S0)) is a particularly simple example of a regressable formula because it is uniform in do([α1, …, αn], S0), but in general regressable formulas can mention several different ground situation terms. Roughly speaking, the regression of a regressable formula φ through an action a is a formula φ′ that holds prior to a being performed iff φ holds after a. Both precondition axioms and SSAs support regression in a natural way and are no longer needed when regression terminates. The regression theorem proved in [23] shows that one can reduce the evaluation of a regressable formula W to an FO theorem proving task in the initial theory together with the unique names axioms for actions: D ⊨ W iff D_S0 ∪ D_una ⊨ R[W]. This fact is the key result for our paper. It demonstrates that an executability or a projection task can be reduced to a theorem proving task that does not use the precondition, successor state, and foundational axioms.
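A single regression step can be sketched for the enrolled fluent of Section 2. The SSA instance used here is our own reading of Figure 1; by the unique-names axioms D_una, equalities between ground action terms can be decided syntactically, so regression through a ground action yields a formula about the previous situation only:

```python
# Symbolic one-step regression for the enrolled fluent, using the SSA
#   enrolled(x,y,do(a,s)) == a = enroll(x,y) v (enrolled(x,y,s) ^ a != drop(x,y)).
# Ground actions are tuples; the result is a formula (string) about s alone.
def regress_enrolled(x, y, action):
    if action == ("enroll", x, y):
        return "true"                     # the action establishes the fluent
    if action == ("drop", x, y):
        return "false"                    # the action destroys the fluent
    return f"enrolled({x},{y},s)"         # any other action leaves it unchanged

# Regressing the goal enrolled(P1, CS1, do(enroll(P1, CS1), s)):
phi = regress_enrolled("P1", "CS1", ("enroll", "P1", "CS1"))
```

Iterating this step right to left through a ground action sequence reduces a projection query about do([α1, …, αn], S0) to a query about S0 alone, which is exactly what the regression theorem exploits.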
This is one of the reasons why the SC provides a natural and easy way to represent and reason about dynamic systems. However, because D_S0 is an arbitrary FO theory, this type of reasoning in the SC is undecidable. One of the common ways to overcome this difficulty is to introduce the closed world assumption, which amounts to assuming that D_S0 is a relational theory (i.e., it has no occurrences of formulas having the syntactic form F1(x̄1, S0) ∨ F2(x̄2, S0) or ∃x F(x, S0), etc.) and that all statements not explicitly known to be true are assumed to be false. In many application domains this assumption is unrealistic. Therefore, we consider a version of the SC formulated in FO², a syntactic fragment of FO logic that is known to be decidable, or in C², an extension of FO² (see below) in which the satisfiability problem is still decidable.

4 Description Logics and Two-Variable First-Order Logics

In this section we review a few popular expressive description logics and related fragments of FO logic. We start with the logic ALCHQI. Let N_C = {C1, C2, …} be a set of atomic concept names and N_R = {R1, R2, …} be a set of atomic role names. An ALCHQI role is either some R ∈ N_R or an inverse role R⁻ for R ∈ N_R. An ALCHQI role hierarchy (RBox) RH is a finite set of role inclusion axioms R1 ⊑ R2, where R1, R2 are ALCHQI roles. For R ∈ N_R, we define Inv(R) = R⁻ and Inv(R⁻) = R, and assume that R1 ⊑ R2 ∈ RH implies Inv(R1) ⊑ Inv(R2) ∈ RH. The set of ALCHQI concepts is the minimal set built inductively from N_C and ALCHQI roles using the following rules: all A ∈ N_C are concepts, and, if C, C1, and C2 are ALCHQI concepts, R is a simple role and n ∈ ℕ, then ¬C, C1 ⊓ C2, and (⩾n R.C) are also ALCHQI concepts. We also use some abbreviations for concepts:

C1 ⊔ C2 ≝ ¬(¬C1 ⊓ ¬C2)
C1 ⇒ C2 ≝ ¬C1 ⊔ C2
∃R.C ≝ (⩾1 R.C)
∀R.C ≝ ¬(⩾1 R.¬C)
(⩽n R.C) ≝ ¬(⩾(n+1) R.C)
⊤ ≝ A ⊔ ¬A for some A ∈ N_C
⊥ ≝ ¬⊤
(=n R.C) ≝ (⩾n R.C) ⊓ (⩽n R.C)

Concepts that are not concept names are called complex. A literal

concept is a possibly negated concept name. A TBox T is a finite set of equality axioms C1 ≡ C2 (sometimes general inclusion axioms of the form C1 ⊑ C2 are also allowed, where C1, C2 are complex concepts). An equality with an atomic concept on the left-hand side is a concept definition. In the sequel, we always consider a TBox T that is a terminology, i.e., a finite set of concept definitions with unique left-hand sides: no atomic concept occurs more than once as a left-hand side. We say that a defined concept name C1 directly uses a concept name C2 with respect to T if C1 is defined by a concept definition axiom in T with C2 occurring in the right-hand side of the axiom. Let uses be the transitive closure of directly uses; a TBox T is acyclic if no concept name uses itself with respect to T. An ABox A is a finite set of axioms C(a), R(a, b), and (in)equalities a ≈ b and a ≉ b. The logic ALCQI is obtained by disallowing the RBox. A more expressive logic, ALCQI(⊓, ⊔, ¬, |, id), is obtained from ALCQI by introducing the identity role id (relating each individual with itself) and allowing complex role expressions: if R1, R2 are ALCQI(⊓, ⊔, ¬, |, id) roles and C is a concept, then R1 ⊓ R2, R1 ⊔ R2, ¬R1, R1⁻ and R1|C are ALCQI(⊓, ⊔, ¬, |, id) roles too.⁵ These complex roles can be used in a TBox (in the right-hand sides of definitions). Subsequently, we call a role R primitive if it is either R ∈ N_R or an inverse role R⁻ for R ∈ N_R.
Two-variable FO logic FO² is the fragment of ordinary FO logic (with equality) whose formulas use no more than two variable symbols, x and y (free or bound). Two-variable FO logic with counting, C², extends FO² by allowing the FO counting quantifiers ∃⩾m and ∃⩽m for all m ⩾ 1. Borgida in [6] defines an expressive description logic B and shows that each sentence in the language B without transitive roles and the role-composition operator can be translated to a sentence in C² with the same meaning, and vice versa; i.e., the two languages are equally expressive. A knowledge base KB is a triple (R, T, A). The semantics of a KB is given by translating it into FO logic with counting C² by the operator τ, defined as follows (where x/y, y/x means swapping x and y):

τ_x(A) ≝ A(x) for A ∈ N_C
τ_x(⊤) ≝ x = x
τ_x(⊥) ≝ ¬(x = x)
τ_x(¬C) ≝ ¬τ_x(C)
τ_y(C) ≝ τ_x(C)[x/y, y/x]
τ_x(C1 ⊓ C2) ≝ τ_x(C1) ∧ τ_x(C2)
τ_x(⩾n R.C) ≝ ∃⩾n y.(τ_{x,y}(R) ∧ τ_y(C))
τ_x(∀R.C) ≝ ∀y.(τ_{x,y}(R) ⊃ τ_y(C))
τ_{x,y}(id) ≝ x = y
τ_{x,y}(¬R) ≝ ¬τ_{x,y}(R)
τ_{x,y}(R|C) ≝ τ_{x,y}(R) ∧ τ_y(C)
τ_{x,y}(R⁻) ≝ τ_{y,x}(R)
τ_{x,y}(R1 ⊓ R2) ≝ τ_{x,y}(R1) ∧ τ_{x,y}(R2)
τ_{x,y}(R1 ⊔ R2) ≝ τ_{x,y}(R1) ∨ τ_{x,y}(R2)
τ_{x,y}(R) ≝ R(x, y) for R ∈ N_R
τ_{y,x}(R) ≝ R(y, x) for R ∈ N_R

Borgida's logic B includes all concept and role constructors in ALCQI(⊓, ⊔, ¬, |, id) and, in addition, a special-purpose constructor, product, that builds the role C1 × C2 from two concepts C1 and C2. This construct has a simple semantics, τ_{x,y}(C1 × C2) ≝ τ_x(C1) ∧ τ_y(C2), and makes the translation from C² into B rather straightforward. Although product is not a standard role constructor,⁵ we can use the restriction constructor together with ⊔, ¬ and inverse roles to represent it: for any concepts C1 and C2, C1 × C2 = ((R ⊔ ¬R)|C2) ⊓ (((R ⊔ ¬R)|C1)⁻), where R can be any role name. Consequently, product can be eliminated.

⁵ These standard role constructors and their semantics can be found in [3].
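A fragment of the τ operator is easy to implement directly; the nested-tuple concept syntax and the ASCII rendering of formulas (E>=n for the counting quantifier ∃⩾n) are our own encoding:

```python
# Sketch of the translation operator tau for a fragment of the concept
# language: atomic concepts, negation, conjunction, and at-least restrictions.
def tau(concept, v="x"):
    other = "y" if v == "x" else "x"        # only two variables, ever
    if isinstance(concept, str):            # atomic concept A -> A(v)
        return f"{concept}({v})"
    op = concept[0]
    if op == "not":                         # ~C -> ~tau_v(C)
        return f"~{tau(concept[1], v)}"
    if op == "and":                         # C1 n C2 -> tau_v(C1) & tau_v(C2)
        return f"({tau(concept[1], v)} & {tau(concept[2], v)})"
    if op == "atleast":                     # >=n R.C -> E>=n y.(R(v,y) & tau_y(C))
        _, n, role, c = concept
        return f"E>={n} {other}.({role}({v},{other}) & {tau(c, other)})"
    raise ValueError(f"unsupported constructor: {op}")

# ">=4 enrolled.Course": enrolled in at least four courses, the C2 counting
# property used in the qualified full-time student example of Section 2.
formula = tau(("atleast", 4, "enrolled", "Course"))
```

Note how the translation reuses the same two variable names at every nesting level, which is exactly what keeps the target formulas inside C².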
Therefore, the following statement is a direct consequence of the theorems proved in [6].

Theorem 1 The description logic ALCQI(⊓, ⊔, ¬, |, id) and C² are equally expressive (i.e., each sentence in the language ALCQI(⊓, ⊔, ¬, |, id) can be translated to a sentence in C², and vice versa). In addition, translation in both directions leads to no more than a linear increase in the size of the translated formula.

This statement has an important consequence. Grädel et al. [11] and Pacholski et al. [22] show that the satisfiability problem for C² is decidable. Hence, the satisfiability and subsumption problems for concepts w.r.t. an acyclic or empty TBox in the description logic ALCQI(⊓, ⊔, ¬, |, id) are also decidable.⁶ In Section 5, we take advantage of this and use C² as a foundation for a modified SC.

5 Modeling Dynamic Systems in a Modified Situation Calculus

In this section, we consider dynamic systems formulated in a minor modification of the language of the SC, so that it can be considered an extension of the C² language (with a situation argument added to unary and binary fluents). The key idea is a syntactic modification of the SC such that the executability and projection problems are guaranteed to be decidable as a consequence of the decidability of C².⁷ Moreover, since the modified SC has very strong connections with description logics, which will be explained in detail below, we denote this language L_sc^DL. First of all, the three sorts in L_sc^DL (i.e., actions, situations and objects) are the same as those in L_sc, except that they obey the following restrictions: (1) all terms of sort object are variables (x and y) or constants, i.e., functional symbols are not allowed; (2) all action functions take no more than two arguments.
Each argument of a term of sort action is either a constant or an object variable (x or y); (3) the variable a of sort action and the variable s of sort situation are the only variable symbols allowed in L_sc^DL in addition to x and y. Second, every fluent in L_sc^DL is a predicate with either two or three arguments, one of which is of sort situation. We call fluents with two arguments, one of sort object and one of sort situation, (dynamic) concepts, and fluents with three arguments, the first two of sort object and the last of sort situation, (dynamic) roles. Intuitively, each (dynamic) concept in L_sc^DL, say F(x, s) with variables x and s only, can be considered a changeable concept F in a dynamic system specified in L_sc^DL; the truth value of F(x, s) may vary from one situation to another. Similarly, each (dynamic) role in L_sc^DL, say R(x, y, s) with variables x, y and s, can be considered a changeable role R in a dynamic system specified in L_sc^DL; the truth value of R(x, y, s) may vary from one situation to another. In L_sc^DL, (static) concepts (i.e., unary predicates with no situation argument) and (static) roles (i.e., binary predicates with no situation argument), if any, are considered eternal facts whose truth values never change. If present, they represent unchangeable taxonomic

⁶ In [3] it is shown that the satisfiability and subsumption problems for concepts can be reduced to each other; moreover, if a TBox T is acyclic, the reasoning problems w.r.t. T can always be reduced to problems w.r.t. the empty TBox.
^7 The reason that we call it a modified SC rather than a restricted SC is that we not only restrict the number of variables that can be mentioned in the SC when formalizing dynamic systems, but also extend the SC with other features, such as counting quantifiers and acyclic TBox axioms added to basic action theories.

properties and unchangeable classes of an application domain. Moreover, each concept (static or dynamic) can be either primitive or defined. For each primitive dynamic concept, an SSA must be provided in the basic action theory formalized for the given system. Because defined dynamic concepts are expressed in terms of primitive concepts by axioms similar to a TBox, SSAs are not provided for them. In addition, SSAs are provided for dynamic primitive roles. Third, apart from the standard FO logical symbols, with the usual definition of a full set of connectives and quantifiers, L_sc^DL also includes the counting quantifiers ∃^{≥m} and ∃^{≤m} for all m ≥ 1. The dynamic systems we are dealing with here satisfy the open world assumption (OWA): what is not stated explicitly is currently unknown rather than false. In this paper, the dynamic systems we are interested in can be formalized as a basic action theory (BAT) D using the following seven groups of axioms in L_sc^DL: D = Σ ∪ D_ap ∪ D_ss ∪ D_T ∪ D_R ∪ D_una ∪ D_S0. Five of them (Σ, D_ap, D_ss, D_una, D_S0) are similar to the corresponding groups of a BAT in L_sc, and the other two (D_T, D_R) are introduced to axiomatize description-logic-related facts and properties (see below). However, because L_sc^DL allows only two object variables, all axioms must conform to the following additional requirements.

Action precondition axioms D_ap: For each action A in L_sc^DL, there is one axiom of the form Poss(A, s) ≡ Π_A[s] (or Poss(A(x), s) ≡ Π_A(x)[s], or Poss(A(x, y), s) ≡ Π_A(x, y)[s], respectively), if A is an action constant (or a unary, or a binary action term, respectively), where Π_A (or Π_A(x), or Π_A(x, y), respectively) is a C² formula with no free variables (or with at most x, or with at most x and y, as the only free variables, respectively). This set of axioms characterizes the preconditions of all actions.

Successor state axioms D_ss: For each primitive dynamic concept F(x, s) in L_sc^DL, an SSA is specified for F(x, do(a, s)).
According to the general syntactic form of the SSAs provided in [25], without loss of generality we can assume that the axiom has the form

F(x, do(a, s)) ≡ ψ_F(x, a, s),   (1)

where the general structure of ψ_F(x, a, s) is as follows:

ψ_F(x, a, s) ≡ (⋁_{i=1..m_0} [∃x][∃y](a = A_i^+(x̄_(i,0,+)) ∧ φ_i^+(x̄_(i,1,+))[s])) ∨ F(x, s) ∧ ¬(⋁_{j=1..m_1} [∃x][∃y](a = A_j^-(x̄_(j,0,-)) ∧ φ_j^-(x̄_(j,1,-))[s])),

where each variable vector x̄_(i,n,b) (or x̄_(j,n,b), respectively) (i = 1..m_0, j = 1..m_1, n ∈ {0, 1}, b ∈ {+, -}) represents a vector of object variables, which can be empty, ⟨x⟩, ⟨y⟩, ⟨x, y⟩ or ⟨y, x⟩. Moreover, [∃x] or [∃y] indicates that the quantifier included in [ ] is optional; and each φ_i^+(x̄_(i,1,+)), i = 1..m_0 (φ_j^-(x̄_(j,1,-)), j = 1..m_1, respectively) is a C² formula whose variables (both free and quantified) are among x and y. Similarly, an SSA for a dynamic primitive role R(x, y, s) is provided as a formula of the form

R(x, y, do(a, s)) ≡ ψ_R(x, y, a, s),   (2)

where, without loss of generality, the general structure of ψ_R(x, y, a, s) is as follows:

ψ_R(x, y, a, s) ≡ (⋁_{i=1..m_2} [∃x][∃y](a = A_i^+(x̄_(i,0,+)) ∧ φ_i^+(x̄_(i,1,+))[s])) ∨ R(x, y, s) ∧ ¬(⋁_{j=1..m_3} [∃x][∃y](a = A_j^-(x̄_(j,0,-)) ∧ φ_j^-(x̄_(j,1,-))[s])),

where each variable vector x̄_(i,n,b) (or x̄_(j,n,b), respectively) (i = 1..m_2, j = 1..m_3, n ∈ {0, 1}, b ∈ {+, -}) represents a vector of free variables, which can be empty, ⟨x⟩, ⟨y⟩, ⟨x, y⟩ or ⟨y, x⟩; the optional quantifiers [∃x], [∃y] are as above; and each φ_i^+(x̄_(i,1,+)), i = 1..m_2 (φ_j^-(x̄_(j,1,-)), j = 1..m_3, respectively) is a C² formula whose variables (both free and quantified) are among x and y.^8

Acyclic TBox axioms D_T: Similar to the TBox axioms in DL, we may also introduce a group of axioms D_T, later called TBox axioms, to define new concepts. Any group of TBox axioms D_T may include two sub-classes: a static TBox D_T,st and a dynamic TBox D_T,dyn.

^8 Notice that when m_0 (or m_1, m_2, m_3, respectively) is equal to 0, the corresponding disjunctive subformula is equivalent to false.
Every formula in the static TBox is a concept definition of the form G(x) ≡ φ_G(x), where G is a unary predicate symbol and φ_G(x) is a C² formula of the domain with free variable x in which no dynamic concept or dynamic role occurs. Every formula in the dynamic TBox is a concept definition of the form G(x, s) ≡ φ_G(x)[s], where φ_G(x) is a C² formula with free variable x in which at least one dynamic concept or dynamic role occurs. All concepts appearing on the left-hand side of TBox axioms are called defined concepts. During reasoning, we use the lazy unfolding technique (see [2]) to expand a given sentence whenever we regress defined dynamic concepts. In this paper, we require the set of TBox axioms to be acyclic, to ensure that the lazy unfolding approach terminates in a finite number of steps (acyclicity in D_T is defined exactly as it is for a TBox).

RBox axioms D_R: Similar to the idea of an RBox in DL, we may also specify a group of axioms, called RBox axioms below, to support a role taxonomy. Each role inclusion axiom R_1 ⊑ R_2, if any, where R_1 and R_2 are primitive roles (either static or dynamic), is represented as R_1(x, y)[s] ⊃ R_2(x, y)[s]. If these axioms are included in the BAT D, then it is assumed that D is specified correctly, in the sense that the meaning of any RBox axiom included in the theory is correctly compiled into the SSAs. This means that one can prove by induction that D ⊨ ∀s.R_1(x, y)[s] ⊃ R_2(x, y)[s]. Although RBox axioms are not used by the regression operator, they are used for taxonomic reasoning in the initial theory.

Initial theory D_S0: It is a finite set of C² sentences (assuming that we suppress the only situation term S_0 in all fluents). It specifies the incomplete information about the initial problem state and also describes all the facts that do not change over time in the application domain.
In particular, it includes the static TBox axioms D_T,st as well as the RBox axioms in the initial situation S_0 (if any). The remaining two classes (Σ and D_una) are the same as in the usual SC. Having defined what a BAT in L_sc^DL is, we turn our attention to the reasoning tasks. Given a formula W of L_sc^DL in the domain D, the definition of W being regressable (called L_sc^DL-regressable below) differs slightly from the definition of W being regressable in L_sc (see Section 3) by adding the following conditions: (4) any variable (free or bound) in W is either x or y; (5) every term of sort situation in W is ground. Moreover, to avoid introducing new variables and to ensure that defined dynamic concepts are handled, we modify the regression operator (which below is still denoted R) for L_sc^DL-regressable formulas using the following ideas: (1) whenever the operator meets a defined dynamic concept, it replaces the concept with its definition, i.e., with the right-hand side of the TBox axiom for this concept; (2) whenever the operator meets an atomic formula in which the positions of the variables x and y differ from their positions on the left-hand side of the corresponding axiom in the basic action theory, we switch all occurrences of x and y (both free and quantified) on the right-hand side of the axiom when replacing the atomic formula with that right-hand side. The detailed definition can be found in [14]. We proved that, using such a regression operator on L_sc^DL-regressable

formulas, regression terminates in a finite number of steps. Moreover, we also proved the following key property, indicating that the projection and executability problems are decidable for any L_sc^DL-regressable formulas in the modified situation calculus L_sc^DL.

Theorem 2 (see [14] for details) Suppose W is an L_sc^DL-regressable formula with the background basic action theory D. Then the problem whether D ⊨ W is decidable.

6 An Example

In this section, we give an example to illustrate the basic ideas described above.

Example 1 Consider a university that provides student administration and management services on the Web, such as admitting students, paying tuition fees, enrolling in or dropping courses, and entering grades. Although the number of object arguments in the predicates can be at most two, we are sometimes still able to handle features of the system that require more than two arguments. For example, the grade z of a student x in a course y may be represented as a predicate grade(x, y, z) in general FOL (i.e., with three object arguments). Because the number of distinct grades is finite and they can be easily enumerated as A, B, C or D, we can handle grade(x, y, z) by replacing it with a finite number of extra predicates, say gradeA(x, y), gradeB(x, y), gradeC(x, y) and gradeD(x, y), each with two variables only. However, the restriction on the number of variables limits the expressive power of the language if more than two arguments vary over infinite domains (such as energy, weight, time, etc.). Despite this limitation, we conjecture that many web services can still be represented with at most two variables, either by introducing extra predicates (just as we did for the predicate grade) or by grounding some of the arguments if their domains are finite and relatively small.
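The grade(x, y, z) reduction just described can be performed mechanically. The following Python fragment is purely an illustrative sketch of the idea (the function name and the sample facts are hypothetical, not part of the paper):

```python
# Sketch: reducing a ternary predicate grade(x, y, z) to binary predicates
# by grounding the finite grade argument z over {A, B, C, D}.
# All names and data here are illustrative assumptions.

GRADES = ["A", "B", "C", "D"]

def ground_grade_atoms(ternary_facts):
    """Turn facts grade(x, y, z) into facts of binary predicates gradeA..gradeD."""
    binary_facts = {f"grade{g}": set() for g in GRADES}
    for (x, y, z) in ternary_facts:
        binary_facts[f"grade{z}"].add((x, y))
    return binary_facts

facts = {("P1", "CS1", "A"), ("P1", "CS2", "B"), ("P2", "CS1", "A")}
reduced = ground_grade_atoms(facts)
print(sorted(reduced["gradeA"]))  # [('P1', 'CS1'), ('P2', 'CS1')]
```

The same grounding trick applies to any argument position that ranges over a finite, relatively small domain, at the cost of one new predicate per domain element.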
Intuitively, it seems that most dynamic systems can be specified using properties and actions with small arities; hence, the arity-reduction techniques mentioned above and below require no more than a polynomial increase in the number of axioms. The high-level features of our example are specified as the following concepts and roles.

Static primitive concepts: person(x) (x is a person); course(x) (x is a course provided by the university).

Dynamic primitive concepts: incoming(x, s) (x is an incoming student in situation s; it becomes true when x is admitted); student(x, s) (x is an eligible student in situation s; it becomes true when an incoming student x pays the tuition fee).

Dynamic defined concepts: eligFull(x, s) (x is eligible to be a full-time student, having paid more than 5000 dollars in tuition); eligPart(x, s) (x is eligible to be a part-time student, having paid no more than 5000 dollars in tuition); qualFull(x, s) (x is a qualified full-time student if he or she pays the full-time tuition fee and takes at least 4 courses); qualPart(x, s) (x is a qualified part-time student if he or she pays the part-time tuition and takes 2 or 3 courses).

Static role: prereq(x, y) (course x is a prerequisite of course y).

Dynamic roles: tuitPaid(x, y, s) (x has paid tuition fee y in situation s); enrolled(x, y, s) (x is enrolled in course y in situation s); completed(x, y, s) (x has completed course y in situation s); hadGrade(x, y, s) (x has a grade for course y in situation s); gradeA(x, y, s); gradeB(x, y, s); gradeC(x, y, s); gradeD(x, y, s).
Web services are specified as actions: reset (at the beginning of each academic year the system is reset, so that students need to pay the tuition fee again to remain eligible); admit(x) (the university admits student x); payTuit(x, y) (x pays a tuition fee of amount y); enroll(x, y) (x enrolls in course y); drop(x, y) (x drops course y); enterA(x, y) (enter grade A for student x in course y); enterB(x, y); enterC(x, y); enterD(x, y). The basic action theory is as follows (most of the axioms are self-explanatory).

Precondition Axioms:
Poss(reset, s) ≡ true,
Poss(admit(x), s) ≡ person(x) ∧ ¬incoming(x, s),
Poss(payTuit(x, y), s) ≡ incoming(x, s) ∧ ¬student(x, s),
Poss(drop(x, y), s) ≡ enrolled(x, y, s) ∧ ¬completed(x, y, s),
Poss(enterA(x, y), s) ≡ enrolled(x, y, s) ∧ ¬completed(x, y, s),
and, as for enterA(x, y), the precondition for enterB(x, y) (enterC(x, y) and enterD(x, y), respectively) in any situation s also requires enrolled(x, y, s). Moreover, in the traditional SC, the precondition for the action enroll(x, y) would be equivalent to (∀z)(prereq(z, y) ⊃ completed(x, z, s) ∧ ¬gradeD(x, z, s)) ∧ student(x, s) ∧ course(y). However, in the modified SC we allow at most two variables (free or quantified) other than the situation variable s and the action variable a. Fortunately, the number of courses offered in a university is limited (finite and relatively small) and relatively stable over the years (if we manage the students college-wise or department-wise, the number of courses may be even smaller). Therefore, we can specify the precondition of the action enroll(x, y) for each instance of y. That is, assuming that the set of courses is {CS_1, …, CS_n}, the precondition axiom for each CS_i (i = 1..n) is
Poss(enroll(x, CS_i), s) ≡ student(x, s) ∧ (∀y)(prereq(y, CS_i) ⊃ completed(x, y, s) ∧ ¬gradeD(x, y, s)).
On the other hand, once we perform this transformation, we can omit the statements course(x) for each course available at the university from the initial theory.
Successor State Axioms: The SSAs for the fluents gradeB(x, y, s), gradeC(x, y, s) and gradeD(x, y, s) are very similar to the one for the fluent gradeA(x, y, s) (and therefore are not repeated here), which ensures that for each student and each course no more than one grade is assigned.
incoming(x, do(a, s)) ≡ a = admit(x) ∨ incoming(x, s),
student(x, do(a, s)) ≡ (∃y)(a = payTuit(x, y)) ∨ student(x, s) ∧ a ≠ reset,
tuitPaid(x, y, do(a, s)) ≡ a = payTuit(x, y) ∨ tuitPaid(x, y, s) ∧ a ≠ reset,
enrolled(x, y, do(a, s)) ≡ a = enroll(x, y) ∨ enrolled(x, y, s) ∧ ¬(a = drop(x, y) ∨ a = enterA(x, y) ∨ a = enterB(x, y) ∨ a = enterC(x, y) ∨ a = enterD(x, y)),
completed(x, y, do(a, s)) ≡ a = enterA(x, y) ∨ a = enterB(x, y) ∨ a = enterC(x, y) ∨ a = enterD(x, y) ∨ completed(x, y, s) ∧ a ≠ enroll(x, y),
gradeA(x, y, do(a, s)) ≡ a = enterA(x, y) ∨ gradeA(x, y, s) ∧ ¬(a = enterB(x, y) ∨ a = enterC(x, y) ∨ a = enterD(x, y)).

Acyclic TBox Axioms: (there are no static TBox axioms in this example)
eligFull(x, s) ≡ (∃y)(tuitPaid(x, y, s) ∧ y > 5000),
eligPart(x, s) ≡ (∃y)(tuitPaid(x, y, s) ∧ y ≤ 5000),
qualFull(x, s) ≡ eligFull(x, s) ∧ (∃^{≥4} y)enrolled(x, y, s),
qualPart(x, s) ≡ eligPart(x, s) ∧ (∃^{≥2} y)enrolled(x, y, s) ∧ (∃^{≤3} y)enrolled(x, y, s).

An example of the initial theory D_S0 could be the conjunction of the following sentences:

11 person(p 1), person(p 2),, person(p m), ( x)incoming(x, S 0) x = P 2 x = P 3, prereq(cs 1, CS 4) prereq(cs 3, CS 4), ( x)x CS 4 ( y).prep eq(y, x), ( x) student(x, S 0). We may also introduce some RBox axioms as follows: gradea(x, y, s) hadgrade(x, y, s), gradeb(x, y, s) hadgrade(x, y, s), gradec(x, y, s) hadgrade(x, y, s), graded(x, y, s) hadgrade(x, y, s). The RBox axioms are not used in the regression steps of reasoning about executability problems and projection problems. However, they are useful for terminological reasonings when necessary. For instance, we may reason about x. s.(( 1 y)(hadgrade(x, y, s) course(y)) (( 1 y)(gradea(x, y, s) course(y)) ( 1 y)(gradeb(x, y, s) course(y)))). Since the truth value of such statement in fact has nothing to do with the situation argument, it is the same as the following formula represented in Description Logics. 1 hadgrade.course ( 1 gradea.course 1 gradeb.course). Finally, we give an example of regression of a L DL sc regressable formula. R[( x).qualf ull(x, do([admit(p 1), payt uit(p 1, 6000)], S 0))] = R[( x).eligf ull(x, do([admit(p 1), payt uit(p 1, 6000)], S 0)) ( 4 y)enrolled(x, y, do([admit(p 1), payt uit(p 1, 6000)], S 0))] = = ( x).( 4 y)enrolled(x, y, S 0) (( y)r[y > 5000 tuitp aid(x, y, do([admit(p 1), payt uit(p 1, 6000)], S 0))]) = = ( x).( 4 y)enrolled(x, y, S 0) (( y).tuitp aid(x, y, S 0) y > 5000 (x = P 1 y = 6000 y > 5000)) which is false given the above initial theory. Suppose we denote the above basic action theory as D. Given goal G, for example x.qualf ull(x), and a composite web service starting from the initial situation, for example do([admit(p 1), payt uit(p 1, 6000)], S 0) (we denote the corresponding resulting situation as S r), we can check if the goal is satisfied after the execution of this composite web service by solving the projection problem whether D = G[S r]. In our example, this corresponds to solving whether D = x.qualf ull(x, S r). 
We may also check whether a given (ground) composite web service A_1; A_2; …; A_n can be executed starting from the initial situation by solving the executability problem of whether D ⊨ executable(do([A_1, A_2, …, A_n], S_0)). For example, we can check whether the composite web service admit(P_1); payTuit(P_1, 6000) can be executed from the initial situation by deciding whether D ⊨ executable(S_r).

7 Discussion and Future Work

The major consequence of the results proved above for the problem of service composition is the following. If both atomic services and the properties of the world that can be affected by these services have no more than two parameters, then we are guaranteed that, even in a state of incomplete information about the world, one can always determine whether a sequentially composed service is executable and whether this composite service will achieve a desired effect. The previously proposed approaches made different assumptions: [20] assumes that complete information is available about the world when the effects of a composite service are computed, and [5] considers the propositional fragment of the SC. As we mentioned in the Introduction, [20, 21] propose to use Golog for the composition of Semantic Web services. Because our primitive actions correspond to elementary services, it is desirable to define Golog in our modified SC too.

It is surprisingly straightforward to define almost all Golog operators starting from our C²-based SC. The only restriction in comparison with the original Golog [18, 25] is that we cannot define the operator (πx)δ(x), the non-deterministic choice of an action argument, because L_sc^DL-regressable formulas cannot have occurrences of non-ground action terms in situation terms. In the original Golog this is allowed, because the regression operator is defined for a larger class of regressable formulas. However, everything else from the original Golog specification remains in force; no modifications are required. In addition to providing a well-defined semantics for Web services, our approach also guarantees that the evaluation of tests in Golog programs is decidable (with respect to an arbitrary theory D_S0), which is missing in other approaches (unless one makes the closed world assumption or imposes another restriction to regain decidability). The most important direction for future research is an efficient implementation of a decision procedure for solving the executability and projection problems. This procedure should handle the modified L_sc^DL regression and do efficient reasoning in D_S0. It should be straightforward to modify existing implementations of the regression operator for our purposes, but it is less obvious which reasoner will work efficiently on practical problems. There are several different directions that we are going to explore. First, according to [6] and Theorem 1, there exists an efficient algorithm for translating C² formulas to ALCQI(,,,, id) formulas. Consequently, we can use any resolution-based description logic reasoner that can handle ALCQI(,,,, id) (e.g., MSPASS [17]). Alternatively, we can try to use appropriately adapted tableaux-based description logic reasoners, such as FaCT++, for (un)satisfiability checking in ALCQI(,,,, id). Second, we can try to avoid any translation from C² to ALCQI(,,,, id) and adapt resolution-based automated theorem provers for our purposes [7].

The recent paper by Baader et al. [4] proposes an integration of the description logic ALCQIO (and its sub-languages) with an action formalism for reasoning about Web services. That paper starts with a description logic and then defines services (actions) meta-theoretically: an atomic service is defined as a triple of sets of description logic formulas. To solve the executability and projection problems, it introduces an approach similar to regression and reduces these problems to description logic reasoning. The main aim is to show how the executability of sequences of actions and solutions of the executability and projection problems can be computed, and how the complexity of these problems depends on the chosen description logic. In the full version of [4], there is a detailed embedding of the proposed framework into a syntactic fragment of Reiter's SC. It is shown that solutions of their executability and projection problems correspond to solutions of these problems with respect to Reiter's basic action theories in this fragment for appropriately translated formulas (see Theorem 12 in Section 2.4). To achieve this correspondence, one needs to eliminate the TBox by unfolding (an operation that can potentially result in an exponential blow-up of the theory). Although our paper and [4] have common goals, our developments start differently and proceed in different directions. We start from a syntactically restricted FO language (that is significantly more expressive than ALCQIO) and use it to construct the modified SC (where actions are terms), define basic action theories in this language, and show that by augmenting the (appropriately modified) regression with lazy unfolding one can reduce the executability and projection problems to the satisfiability problem in C², which is decidable. Furthermore, C² formulas can be translated to ALCQI(,,,, id), if desired. Because our regression operator unfolds fluents on demand and uses only the relevant part of the (potentially huge) TBox, we avoid potential computational problems that may occur if the TBox were eliminated in advance. The advantage of [4] is that all reasoning is reduced to reasoning in description logics (and, consequently, can be efficiently implemented, especially for less expressive fragments of ALCQIO). Our advantages are two-fold: the convenience of representing actions as terms, and the expressive power of L_sc^DL. Because C² and ALCQI(,,,, id) are equally expressive, there are some (situation-suppressed) formulas in our SC that cannot be expressed in ALCQIO (which does not allow complex roles). An interesting paper [19] aims to achieve computational tractability of solving the projection and progression problems by following a direction alternative to the approach chosen here. The theory of the initial state is assumed to be in the so-called proper form, and the query used in the projection problem is expected to be in a certain normal form. In addition, [19] considers the general SC and imposes no restriction on the arity of fluents. Because of these significant differences, it is not possible to compare the two approaches directly. There are several other proposals to capture the dynamics of the world in the framework of description logics and/or its slight extensions.
Instead of dealing with actions and the changes they cause, some approaches turn to extensions of description logics with temporal logics to capture changes of the world over time [1, 2], while others combine planning techniques with description logics to reason about tasks, plans and goals and to exploit descriptions of actions, plans, and goals during plan generation, plan recognition, or plan evaluation [10]. Both [1] and [10] review several other related papers. In [5], Berardi et al. specify all the actions of e-services as constants and all the fluents of the system with only a situation argument, and translate the basic action theory under this assumption into a description logic framework. This approach has limited expressive power, since it uses no object arguments for actions and/or fluents: this may cause a blow-up of the knowledge base.

Acknowledgments

Thanks to the Natural Sciences and Engineering Research Council of Canada (NSERC) and to the Department of Computer Science of the University of Toronto for providing partial financial support for this research.

REFERENCES

[1] Alessandro Artale and Enrico Franconi. A survey of temporal extensions of description logics. Annals of Mathematics and Artificial Intelligence, 30(1-4).
[2] Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors. The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press.
[3] Franz Baader, Ian Horrocks, and Ulrike Sattler. Description logics as ontology languages for the semantic web. In Dieter Hutter and Werner Stephan, editors, Mechanizing Mathematical Reasoning: Essays in Honor of Jörg H. Siekmann on the Occasion of His 60th Birthday, Lecture Notes in Computer Science, vol. 2605. Springer.
[4] Franz Baader, Carsten Lutz, Maja Miličić, Ulrike Sattler, and Frank Wolter. Integrating description logics and action formalisms: First results.
In Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05), Pittsburgh, PA, USA, July. An extended version is available as an LTCS-Report.
[5] Daniela Berardi, Diego Calvanese, Giuseppe De Giacomo, Maurizio Lenzerini, and Massimo Mecella. e-Service composition by description logics based reasoning. In Diego Calvanese, Giuseppe De Giacomo, and Enrico Franconi, editors, Proceedings of the 2003 International Workshop on Description Logics (DL-2003), Rome, Italy.
[6] Alexander Borgida. On the relative expressiveness of description logics and predicate logics. Artificial Intelligence, 82(1-2).
[7] Hans de Nivelle and Ian Pratt-Hartmann. A resolution-based decision procedure for the two-variable fragment with equality. In R. Goré, A. Leitsch, and T. Nipkow, editors, IJCAR'01: Proceedings of the First International Joint Conference on Automated Reasoning, London, UK. Springer-Verlag, LNAI.
[8] Giuseppe De Giacomo. Decidability of Class-Based Knowledge Representation Formalisms. Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza", Roma, Italy.
[9] Giuseppe De Giacomo, Luca Iocchi, Daniele Nardi, and Riccardo Rosati. A theory and implementation of cognitive mobile robots. Journal of Logic and Computation, 9(5).
[10] Yolanda Gil. Description logics and planning. AI Magazine, 26(2):73-84.
[11] Erich Grädel, Martin Otto, and Eric Rosen. Two-variable logic with counting is decidable. In Proceedings of the 12th Annual IEEE Symposium on Logic in Computer Science (LICS'97), Warsaw, Poland.
[12] Michael Grüninger. Ontology of the process specification language. In Steffen Staab and Rudi Studer, editors, Handbook on Ontologies. Springer.
[13] Michael Grüninger and Christopher Menzel. The process specification language (PSL): Theory and applications. AI Magazine, 24(3):63-74.
[14] Yilan Gu and Mikhail Soutchanski. The two-variable situation calculus. In Proc.
of the Third European Starting AI Researcher Symposium (STAIRS'06), Riva del Garda, Italy, to appear. IOS Press. yilan/publications/papers/stairs06.pdf.
[15] Ian Horrocks, Peter Patel-Schneider, and Frank van Harmelen. From SHIQ and RDF to OWL: The making of a web ontology language. Journal of Web Semantics, 1(1):7-26.
[16] Richard Hull and Jianwen Su. Tools for composite web services: a short overview. SIGMOD Record, 34(2):86-95.
[17] Ullrich Hustadt and Renate A. Schmidt. Issues of decidability for description logics in the framework of resolution. In R. Caferra and G. Salzer, editors, Automated Deduction. Springer-Verlag, LNAI, vol. 1761.
[18] Hector Levesque, Ray Reiter, Yves Lespérance, Fangzhen Lin, and Richard Scherl. GOLOG: A logic programming language for dynamic domains. Journal of Logic Programming, 31:59-84.
[19] Yongmei Liu and Hector J. Levesque. Tractable reasoning with incomplete first-order knowledge in dynamic systems with context-dependent actions. In Proc. IJCAI-05, Edinburgh, Scotland, August.
[20] Sheila McIlraith and Tran Son. Adapting Golog for composition of semantic web services. In D. Fensel, F. Giunchiglia, D. McGuinness, and M.-A. Williams, editors, Proceedings of the Eighth International Conference on Knowledge Representation and Reasoning (KR2002), Toulouse, France, April. Morgan Kaufmann.
[21] Srini Narayanan and Sheila McIlraith. Analysis and simulation of web services. Computer Networks, 42.
[22] Leszek Pacholski, Wiesław Szwast, and Lidia Tendera. Complexity of two-variable logic with counting. In Proceedings of the 12th Annual IEEE Symposium on Logic in Computer Science (LICS-97), Warsaw, Poland. A journal version: SIAM Journal on Computing, 29(4), 1999.
[23] Fiora Pirri and Ray Reiter. Some contributions to the metatheory of the situation calculus. Journal of the ACM, 46(3).
[24] Marco Pistore, Annapaola Marconi, Piergiorgio Bertoli, and Paolo Traverso.
Automated composition of web services by planning at the knowledge level. In Leslie Pack Kaelbling and Alessandro Saffiotti, editors, Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05), Edinburgh, Scotland, UK, July 30-August.
[25] Raymond Reiter. Knowledge in Action: Logical Foundations for Describing and Implementing Dynamical Systems. The MIT Press.
[26] Evren Sirin, Bijan Parsia, Dan Wu, James Hendler, and Dana Nau. HTN planning for web service composition using SHOP2. Journal of Web Semantics, 1(4), October.

A Service Selection Model to Improve Composition Reliability

Natallia Kokash^1

Abstract. One of the most promising advantages of web service technology is the possibility of creating added-value services by combining existing ones. A key step in composing and executing services lies in the selection of the individual services to use. Much attention has been devoted to the appropriate selection of service functionalities, but the non-functional properties of the services also play a key role. Due to the ever-changing business environment, service users are not protected from unexpected service faults. Service composition can compensate for the deficiencies of its constituent components by aggregating a redundant number of them for individual tasks. This paper proposes a service selection strategy aimed at minimizing the impact of atomic service failures on the quality of service of compositions.

1 Introduction

Service-Oriented Architecture (SOA) is an emerging organizational model aimed at simplifying large-scale business operations through the consumption of ready-to-use services. The most prominent realization of SOA is currently in the area of web services. Web services are loosely-coupled, platform-independent, self-describing software components that can be published, located and invoked via the web infrastructure using a stack of standards such as SOAP, WSDL and UDDI [12]. Composition of web services is probably the most interesting challenge spawned by this paradigm. Many efforts have been devoted to the development of automatic or semi-automatic service composition mechanisms [1] [10] [16] [15]. We are striving for runtime composition, where a composer automatically searches, binds and executes available web services to satisfy some user request. Such a scenario is hardly feasible for a large spectrum of applications. Automatic service compositions are error-prone. State-of-the-art techniques are not mature enough to guarantee a common semantics of the involved operations.
Testing, adaptation, verification and validation processes are required. However, statically composed services able to accomplish some generalized request classes (e.g., user trip planning) can provide manifold alternative components for each subgoal and practice dynamic switching between them. A set of services to satisfy a particular user request is selected depending on conditions such as Quality of Service (QoS) parameters, service provider policy or user preferences. Several approaches for runtime selection of component services have been developed [7] [17] [18]. They tend to consider multiple quality factors and search for a solution that optimizes a weighted composition of their average values under user constraints. In this paper, we propose a novel service selection strategy. Our approach^1 differs from the existing works in two aspects. First, it is well known that good average statistics do not prevent unexpected service faults. We study the problem of service selection assuming probable service failures. The solution is represented by several compositions able to satisfy given constraints on quality parameters and to reduce fault recovery expenses. Second, we use a single quality measure that nevertheless takes into consideration the correlation between service reliability and other parameters. The paper is structured as follows. Section 2 discusses related work. In Section 3, the quality and web service composition models are outlined. Section 4 introduces a compound quality measure that characterizes the reliability of redundant service compositions. The web service selection mechanism we propose is presented in Section 5. Finally, conclusions and future work are sketched in Section 6.

^1 Department of Information and Communication Technology, University of Trento, Via Sommarive 14, Trento, Italy

2 Related Work

In this section, we cover related work on the QoS of atomic web services and of web service compositions.
2.1 Quality of Atomic Web Services

Ran [13] describes basic non-functional QoS parameters. In Table 1, we list the most common numeric factors, which will be used below.

Table 1. QoS factors of web services [13]

Throughput: The number of requests served in a given time period.
Capacity: A limit on concurrent requests for guaranteed performance.
Latency: The round-trip time between client request and service response.
Response time: The time taken by a service to process its sequence of activities.
Availability: The probability that a service is available.
Reliability: Stability of a service's functionality, i.e., the ability of a service to perform its functions under stated conditions.
Reputation: The average rating of the service reported by clients.
Execution cost: The amount of money for a single service execution.

A Service Level Agreement (SLA) defines the agreed level of performance for a particular service between a service provider and a service user. The Web Service Level Agreement (WSLA) project [6] is targeted at defining and monitoring SLAs for web services. SLA parameters can be measured with different metrics, including composite ones like maximum response time or average availability. Composite metrics are specified by functions which are executed during predefined time

intervals, specified by schedules. Sherchan et al. [14] study the relevance of past service performance measurements for selection. Service quality parameters depend on manifold factors which should be taken into account at the early stages of development. Menasce et al. [11] provide a methodology for planning service capacity. Knowledge about the number of potential users, the frequency of service invocations and their time distributions is essential for this analysis. Accurate capacity planning can be quite problematic because of factor uncertainty or a limited budget. As a consequence, a significant number of troublesome services may appear.

2.2 Quality of Web Service Compositions

Service compositions that embed low-quality services inherit all their drawbacks. This poses a big challenge for software developers building new systems on the basis of available components. Cardoso et al. [4] describe a model that allows predicting the quality of service of workflows based on atomic service QoS attributes. One can compensate for composition deficiencies if many services with compatible functionality exist. Several approaches have been proposed for quality-aware service selection. Zeng et al. [18] consider the service selection task as a global optimization problem. Linear programming is applied to find the solution, i.e., the service composition optimizing the target function. The latter is defined as a linear combination of five parameters: availability, successful execution rate, response time, execution cost and reputation. If the global characteristics (e.g., the total amount of money to accomplish the user goal) are not restricted, the optimal solution can be found by a modified Dijkstra's algorithm searching the graph of available compositions [8]. Martin-Diaz et al. [7] propose a constraint programming solution for the procurement of web services whose demands and offers are temporal-aware.
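The scaling-and-weighting selection scheme of [18] can be sketched as follows. All numeric values and service names below are hypothetical illustrations, not taken from the paper; the sketch merely reproduces the effect discussed later, where min-max scaling lets a small response-time gap outweigh a large availability gap.

```python
# Sketch of weighted-sum (simple additive weighting) service selection.
# Values are hypothetical; they illustrate how min-max scaling can let a
# marginal response-time advantage dominate a large availability deficit.

def scale(values, negative=False):
    """Min-max scale to [0, 1]; for 'negative' criteria (lower is better),
    invert so that 1 is always best."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(hi - v) / (hi - lo) if negative else (v - lo) / (hi - lo)
            for v in values]

services = {"s1": {"time": 900, "avail": 40.0},    # hypothetical values
            "s2": {"time": 1000, "avail": 99.9}}

names = list(services)
t_scaled = scale([services[n]["time"] for n in names], negative=True)
a_scaled = scale([services[n]["avail"] for n in names])

w_time, w_avail = 0.6, 0.4
scores = {n: w_time * t + w_avail * a
          for n, t, a in zip(names, t_scaled, a_scaled)}

best = max(scores, key=scores.get)
print(best)  # s1 wins on the weighted score despite 40% availability
```

Because scaling hides the absolute magnitudes, s1's 100 ms advantage outweighs a nearly 60-point availability gap, which is exactly the limitation of such weighted schemes.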
In [3] the problem is modelled as a mixed integer linear program where both local and global constraints are specified. Yu and Lin [17] model service selection as a complex multi-choice multi-dimension 0-1 knapsack problem; the practice of services offering different quality levels is taken into consideration. The above solutions depend strongly on the user weights assigned to each parameter. However, it is not trivial for a user to establish them in the right way. The example in Table 2 demonstrates the limitations of the algorithm in [18]. Let w1 = 0.6 and w2 = 0.4 reflect the user scores for response time and availability, respectively. After the scaling and weighting phases service s1 will be chosen. However, the services differ only slightly in response time and hugely in availability. As a quick remedy, intuition about absolute QoS values should be involved, e.g., from 0 to 100% for availability and from 0 to timeout ms for response time. The principal drawback of the approach is that the cross-impact of different quality parameters is not considered.

Table 2. QoS-aware WS selection [18]

        Response time (ms)     Availability (%)     Result
        Original   Scaled      Original   Scaled    w = (0.6, 0.4)
s1
s2

Despite the efforts aimed at ensuring web service reliability, service composition failures are almost inevitable. Nevertheless, they can be handled gracefully so that they do not lead to a breakdown of the composition. Two basic approaches to error recovery exist, namely backward and forward recovery. Backward recovery returns the system to a previous fault-free state without requiring detailed knowledge of the fault. Forward recovery assumes the presence of redundant data allowing to analyze the detected fault and put the system into a correct state. Workflow systems rely intensely on backward error recovery if the resources are under the control of a single domain. Forward recovery is extensively used to handle errors in composite web services. Chafle et al. [5] propose a mechanism for fault propagation and recovery in decentralized service orchestrations. A decentralized architecture entails additional complexity, requiring fault propagation between partitions executed independently.

3 Web Service Composition Model

As opposed to the discussed service selection approaches, we do not consider multiple QoS factors. Service reputation is excluded due to its subjective nature: a high reputation of elementary services does not imply a high reputation of their compositions, due to potential problems with input data and effect/precondition satisfaction that cannot be fully verified at the planning phase. We do not consider the nature of web service faults and rely solely on the probability of service success. It can be defined as p(s) = N_suc(s)/N_total(s), where N_suc is the number of successful service responses and N_total is the total number of observed invocations. A service invocation is considered successful if the user goal is satisfied or the execution of a composite service can proceed, i.e., (1) the constituent service was available, (2) a successful response message was received within an established timeout, (3) no errors were detected automatically while checking the outputs and effects, and (4) the preconditions of the subsequent service were satisfied. Along with the probability of success we often use the probability of failure p̄(s) = 1 - p(s). The other relevant parameters are the response time q_time(s) and execution cost q_cost(s) of a service s. A composite web service can be defined in terms of the standard BPEL (Business Process Execution Language) [2] or other composition languages [9]. Services are composed by the following three operators:

C ::= (s1; s2) | (s1 || s2) | (s1 + s2),

where (s1; s2) denotes the sequential, (s1 || s2) the parallel and (s1 + s2) the choice composition of services s1 and s2.
If an error happens in one of the parallel services, we suppose it will be correctly forwarded to the end of the partition [5]. After that we should decide whether the whole parallel composition has failed or only the erroneous branch has to be recovered. For the sake of simplicity we will not consider parallel compositions here. Since we are mostly interested in the non-functional qualities, parallel compositions can be approximately reduced to sequential ones as follows: for a service c = (s1 || s2),

execution cost: q_cost(c) = q_cost(s1) + q_cost(s2),
response time: q_time(c) = max(q_time(s1), q_time(s2)),
probability of success: p(c) = p(s1)p(s2),

Figure 1. Composition graph

and probability of failure

p̄(c) = p̄(s1) + p̄(s2) - p̄(s1)p̄(s2).

4 Fault-tolerant Service Compositions

4.1 Fault Recovery

Existing service composition selection models do not take unpredictable service faults into account. The problem cannot be fully resolved by simply discarding the failed service: the time needed to find a new solution from scratch increases the latency of requests already in the system. We propose to choose several configurations with good qualities at the selection stage. If a failure occurs in the chosen configuration, it is possible to switch to another configuration.

A service composition can be represented as a Directed Acyclic Graph (DAG) G = (T, S) with two distinguished vertices (start and end points). Nodes denote states T = {t_j | j = 1, ..., n} and edges are labelled to represent available web services S = {s_i | i = 1, ..., m}. Generally, multiple edges between two nodes are permitted. The processing of a user request begins in the start state t_0. In the end state t the user goal is accomplished. These two states are well-defined for a given class of problems. It is assumed that all web services have deterministic behavior. The graph G = (T, S) is called a composition graph (see Figure 1). Note that branching in the composition graph corresponds to the choice operator. Further, tail[s_i] will denote the tail and head[s_i] the head of an edge s_i. Let also d_in[t_j] and d_out[t_j] refer to the input and output degree of a state t_j, respectively.

Figure 3. Execution plan

In Figure 3, a possible execution plan is shown. A tuple (t_j, c_i) with an incoming edge a_k means that configuration c_i is started from state t_j after event a_k. In this example a_1 means any failure of composition c_1, a_2 denotes a failure of service s_11 and a_3 refers to a failure of services s_4 or s_7. A set of events and recovery actions can be established automatically based on the analysis of the dependencies between configurations.
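The aggregation rules above, sequential composition and the parallel-to-sequential approximation, can be sketched as follows. The triple representation and the example numbers are illustrative assumptions, not notation from the paper.

```python
# Sketch of the QoS aggregation rules: each service is modelled as a
# (p_success, q_time, q_cost) triple. Values are illustrative.

def seq(a, b):
    """Sequential composition (a; b): success probabilities multiply,
    times and costs add."""
    return (a[0] * b[0], a[1] + b[1], a[2] + b[2])

def par(a, b):
    """Parallel composition reduced as described: cost adds, time is the
    max, and both branches must succeed."""
    return (a[0] * b[0], max(a[1], b[1]), a[2] + b[2])

s1 = (0.95, 200, 1.0)   # p(s1), q_time in ms, q_cost
s2 = (0.90, 500, 2.0)

p, t, cost = par(s1, s2)
print(round(p, 3), t, cost)   # 0.855 500 3.0
print(1 - p)                  # failure probability of the parallel pair
```

The last line agrees with the inclusion-exclusion form of the failure probability, p̄(c) = p̄(s1) + p̄(s2) - p̄(s1)p̄(s2), since both equal 1 - p(s1)p(s2).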
In more complicated scenarios, events like incorrect or incomplete input/output, unsatisfied preconditions or postconditions, a lost message, an exception raised by a service, a connection error, an expired timeout, a violated SLA, etc., may require different reactions. At this point we can start analyzing the error types in order to choose the optimal system behavior.

Figure 2. Configuration tree

A composition with choice operators can be separated into several sequential compositions, called configurations, that correspond to paths between the start and end states in a composition graph. Two configurations are independent if they have no common states except the start and end ones, and dependent otherwise. All possible configurations can be shown on the configuration tree (see Figure 2). The start state of a composition graph corresponds to the start state of the associated configuration tree, and nodes with the same labels have the same output degree.

Definition 1 A node t_i of a composition graph G is a return state if either it is the start state or its outdegree d_out(t_i) > 1.

The above definition identifies the states to which a composite service can be returned in order to recover from a failure of its components. For the composition graph in Figure 1 the return points are {t_0, t_2, t_5}. A node t_i is a return state of a configuration tree iff it is such in the corresponding composition graph.

4.2 Failure Risk

Let c = (s_1; s_2; ...; s_k) be a sequential composition with the only return point tail[s_1]. If a service s_i fails, a new configuration can be started from state tail[s_1]. The results of services {s_1, s_2, ..., s_i} will not be used, whereas their response time and execution cost increase the total expenses of satisfying the user request. We refer to these expenses as the loss function of a failure of service s_i. There is no reason to impose penalties on services {s_1, ..., s_{i-1}}, since they are not responsible for errors of service s_i.
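Definition 1 can be turned into a small computation over the edge list of a composition graph. The graph below is an invented example for illustration, not the graph of Figure 1.

```python
# Minimal sketch of a composition graph: edges are services labelled
# (tail state, head state); return states are the start state and every
# state with out-degree > 1 (Definition 1). The graph is invented.

from collections import defaultdict

# service name -> (tail state, head state)
edges = {"s1": ("t0", "t1"), "s2": ("t0", "t2"),
         "s3": ("t2", "t3"), "s4": ("t2", "t4"),
         "s5": ("t1", "t5"), "s6": ("t3", "t5"), "s7": ("t4", "t5")}

def return_states(edges, start="t0"):
    d_out = defaultdict(int)
    for tail, _ in edges.values():
        d_out[tail] += 1
    states = {t for pair in edges.values() for t in pair}
    return {t for t in states if t == start or d_out[t] > 1}

print(sorted(return_states(edges)))  # ['t0', 't2']
```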
We propose a service selection strategy based on the analysis of failure risks, targeted at decreasing the expected loss in such a scenario. The failure risk combines the probability that some fault will occur with the resulting impact of this fault on the composite service. For an atomic service s_i it equals r(s_i) = p̄(s_i)q(s_i). A service s_1 is considered better than a service s_2 if it has a smaller failure risk, r(s_1) < r(s_2). Within a sequential composition c = (s_1; ...; s_k), k > 1, the risk of a failure of service s_i is measured as

r(s_1; ...; s_i) = p̄(s_1; ...; s_{i-1}; s_i) q(s_1; ...; s_i)

where

p̄(s_1; ...; s_{i-1}; s_i) = (∏_{j=1}^{i-1} p(s_j)) p̄(s_i)

is the probability that the composition fails while service s_i is being executed, and

q(s_1; ...; s_i) = Σ_{j=1}^{i} q(s_j)

is the loss function of this failure. Since for a sequential composition all services should be invoked to accomplish a task, we use a total risk value, which for a chain of k services equals

R(c) = R(s_1; ...; s_k) = Σ_{i=1}^{k} r(s_1; ...; s_i).    (1)

Let c = (s_1 + ... + s_l), l > 1, be a choice composition such that a service s_i is invoked if the alternatives {s_1, ..., s_{i-1}} failed. The failure risk of a choice over i services is defined as

r(s_1 + ... + s_i) = p̄(s_1; ...; s_i) q(s_1; ...; s_i),

where

p̄(s_1; ...; s_i) = ∏_{j=1}^{i} p̄(s_j).

The choice composition fails if all invoked services fail. Obviously, the better candidates should be tried first. The question then is how many services to consider. We propose to include a new candidate if it does not increase the total failure risk of the choice composition, i.e.,

R(c) = R(s_1 + ... + s_l) = min_{i=1,...,l} r(s_1 + ... + s_i).    (2)

Finally, the failure risk of a composition c = (c_1; c_2), where c_1 = (s_1; ...; s_k) is a sequential composition and c_2 = (s_1 + ... + s_l) is a choice one, can be computed as

R(c) = R(c_1; c_2) = R(c_1) + p(c_1) p̄(c_2) (q(c_1) + q(c_2)).

Above, q ∈ {q_time, q_cost} refers either to response time or to execution cost. If user preferences are given, it can incorporate both of them: q(s) = f(q_time, q_cost). Assuming that f is a linear combination, we get

q = w_1 q_time + w_2 q_cost,  w_1 + w_2 = 1,  0 ≤ w_1, w_2 ≤ 1.

For simplicity we have supposed that the money for the invocation of a failed service s is not paid back and that the time to detect an error does not differ significantly from the usual service response time.
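Formulas (1) and (2) can be sketched as follows, assuming each atomic service is reduced to a (success probability, loss) pair, where the loss q may be the weighted combination above. The numeric values are illustrative.

```python
# Sketch of the failure-risk formulas (1) and (2). Each service is a
# (p_success, q_loss) pair; names and numbers are illustrative.

def chain_risk(services):
    """Total failure risk R(s1;...;sk) of a sequential chain, formula (1):
    sum over i of P(first failure at s_i) times accumulated loss."""
    total, p_prefix, q_prefix = 0.0, 1.0, 0.0
    for p, q in services:
        q_prefix += q
        total += p_prefix * (1 - p) * q_prefix  # p(s1)..p(s_{i-1}) * pbar(s_i) * q
        p_prefix *= p
    return total

def choice_risk(services):
    """Failure risk R(s1+...+sl) of a choice, formula (2): candidates are
    tried in order and the composition fails only if all invoked services
    fail; take the prefix length i minimizing r(s1+...+si)."""
    best, pbar_prefix, q_prefix = float("inf"), 1.0, 0.0
    for p, q in services:
        pbar_prefix *= (1 - p)
        q_prefix += q
        best = min(best, pbar_prefix * q_prefix)
    return best

chain = [(0.99, 10.0), (0.95, 20.0)]
print(round(chain_risk(chain), 4))   # 1.585
alts = [(0.9, 10.0), (0.8, 10.0)]
print(round(choice_risk(alts), 4))   # 0.4
```

In the choice example, adding the second (worse) candidate still lowers the risk from 1.0 to 0.4, because it only matters when the first candidate has already failed.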
If this is not the case, the corresponding corrections can be introduced, i.e., the loss function of a service s will be q(s) = {q_ttd(s), q_cost(s) - q_penalty(s)}, where q_ttd stands for the error time-to-detect and q_penalty is a penalty paid by the provider of a failed service. The failure risk is a compound measure considering the probability of constituent service failures, their response time and/or execution cost, along with the structure of the configuration tree. Intuitively, configurations with frequent return points are preferable. However, the composition selection will depend on the balance between all the mentioned parameters. We do not strictly require that the loss function be linear. For example, we can measure the response time of a composite web service with a certain probability as follows. Let P_1(t ≤ t_1) and P_2(t ≤ t_2) be the probabilities that services s_1 and s_2 respond within times t_1 and t_2, respectively, and let P(t ≤ t_0) be the probability that the composite service s = (s_1; s_2) responds within time t_0. Letting p_1(t) and p_2(t) be the corresponding probability density functions, p(t) can be calculated as the convolution

p(t) = ∫ p_1(t - τ) p_2(τ) dτ

and P(t ≤ t_0) can be obtained by taking the corresponding distribution

P(t ≤ t_0) = ∫_0^{t_0} p(t) dt.

5 Web Service Selection Methodology

5.1 Objectives

It is reasonable to assume that an SLA with the end user is established in such a way that the available service configurations can satisfy the constraints on response time and execution cost under normal conditions, i.e., when the services are not overloaded and no failures occur. However, unexpected faults of component services lead to resource loss and may cause a violation of the negotiated parameters. If there is a reserve of resources (the maximum budget for a task has not been reached and there is time left before the task execution deadline), a user task can be completed by another configuration.
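A discretized sketch of the convolution above: if each service's response time is given as probabilities over fixed-width time bins, the response-time distribution of the sequential composition is the discrete convolution of the two. Pure Python, with invented example distributions.

```python
# Discrete convolution of per-service response-time distributions,
# assuming probabilities per fixed-width time bin. Numbers are invented.

def convolve(p1, p2):
    out = [0.0] * (len(p1) + len(p2) - 1)
    for i, a in enumerate(p1):
        for j, b in enumerate(p2):
            out[i + j] += a * b
    return out

def prob_within(p, t0):
    """P(t <= t0): cumulative probability up to bin t0."""
    return sum(p[: t0 + 1])

p1 = [0.7, 0.2, 0.1]    # s1 responds in bin 0/1/2 with these probabilities
p2 = [0.5, 0.5]         # s2 responds in bin 0 or 1
p = convolve(p1, p2)
print([round(x, 2) for x in p])      # [0.35, 0.45, 0.15, 0.05]
print(round(prob_within(p, 1), 2))   # 0.8
```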
Our objective is to choose an execution plan (see Figure 3) that maximizes composition reliability, defined as the probability of accomplishing a user request within the established time and cost boundaries. The problem of increasing service reliability differs from the search for a single configuration with optimal quality parameters. For a single-objective function the latter can be formalized as the selection of a path (s_1; ...; s_k) between the start and end states that maximizes the following target function:

f(c) = p(c)(q_max - q(c)) = p(s_1; ...; s_k)(q_max - q(s_1; ...; s_k)) = (∏_{i=1}^{k} p(s_i)) (q_max - Σ_{i=1}^{k} q(s_i)),

where q_max defines the resource limit, taken from an SLA (or chosen big enough to guarantee a positive value of f(c)). Such a configuration can be found in time O(m), where m is the number of available web services, by searching for a path in the configuration graph optimizing the formula above. However, practical problems rarely have only one objective. The simplest multi-objective problems focus on two criteria; this subclass is known as dual criteria optimization. The basic approach is to take the less important parameter as the objective function provided that the more important criterion meets some requirements. We identify two basic dimensions, namely response time and execution cost. Although it is hard to predict which of them is more important, we suppose that a provider of composite web services should focus on response time. Usually, the internal structure of services is hidden from the end user, so (s)he expects to pay a fixed price for a single service execution (given the same quality level). At the same time, service delays can be indemnified by penalties. Provided that the response time constraint is satisfied, a provider can optimize its own expenses.

5.2 Failure Risk Evaluation Algorithm

In this section we present our risk evaluation model for composite web services.
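The single-objective selection above can be illustrated by brute force: enumerate all start-to-end paths of a small composition graph and keep the one maximizing f(c). The paper's O(m) path search is not reproduced here; the graph, q_max and all numbers are hypothetical.

```python
# Brute-force sketch of selecting a configuration maximizing the target
# function f(c) = p(c) * (q_max - q(c)). Graph and values are hypothetical.

QMAX = 100.0

# edges: (tail, head, p_success, q)
edges = [("t0", "t1", 0.99, 20.0), ("t0", "t2", 0.90, 5.0),
         ("t1", "te", 0.95, 30.0), ("t2", "te", 0.97, 10.0)]

def paths(node, end="te"):
    """Yield every list of edges forming a path from node to end."""
    if node == end:
        yield []
        return
    for e in edges:
        if e[0] == node:
            for rest in paths(e[1], end):
                yield [e] + rest

def f(path):
    p, q = 1.0, 0.0
    for _, _, ps, qs in path:
        p *= ps
        q += qs
    return p * (QMAX - q)

best = max(paths("t0"), key=f)
print([e[:2] for e in best])  # the cheaper t0 -> t2 -> te path wins here
```

Here the slightly less reliable but much cheaper path wins because its lower accumulated q leaves a larger share of the resource limit.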

To simplify the explanation of the proposed algorithm we will use the notion of graph contraction. The contraction of an edge (v_i, v_j) of a directed graph is the directed graph obtained by replacing the two nodes v_i and v_j with a single node v such that v is the head of the edges incoming to the original vertices and the tail of the edges outgoing from them. The contraction of a graph is the process of transforming the graph by applying the edge contraction operation.

A sequential composition (s_1; ...; s_k), k > 0, can be seen as an elementary service with the failure risk obtained by formula (1) (see Figure 4).

Figure 4. Contraction operations in a composition graph

To show this graphically we can contract the path (s_1; ...; s_k), k > 0, in the configuration tree to a single edge. After replacing all such paths we get a reduced tree, where each node is either a return point or the end state, which still unambiguously represents all possible configurations (see Figure 5). Such a tree is called a contracted configuration tree.

Figure 5. Contracted configuration tree

Any choice composition (s_1 + ... + s_l), l > 0, can be seen as an elementary service with the failure risk obtained by formula (2). However, we can calculate the failure risks of composite services with alternative components only if the failure risks of the latter are known, and these may in turn be composite as well. The above observations define our failure risk evaluation algorithm (see Algorithm 1). Given a composition graph, it computes the failure risks of the composite services defined by the subtrees of any return point. Given these values, a weighted contracted configuration tree can be formed where edges with smaller weights represent the most promising directions.

For each composite service s_i let stack[s_i] be a structure where the QoS parameters of constituent services are accumulated. Starting from the end state we gradually contract edge chains and partitions, assigning failure risks to the corresponding services as discussed in Section 4.2. The end state t is the only point satisfying the condition d_out[t] = 0 at the initial step of the algorithm. Since a composition graph is acyclic, after removing an edge s_i we again have states without outgoing edges. When we reach a return point, the chain in the stack of s_i is contracted and its failure risk is measured by (1). When no more sequences can be processed, the algorithm switches to choice compositions. Each choice composition is replaced by a service with the failure risk computed by (2), and this information is transmitted to the preceding edges. Then we repeat the sequential composition replacement, and so on, terminating when no more composite services are found in graph G'.

Algorithm 1. Failure Risk Evaluation (FRE)
1  G' ← G, G' = (T', S')
2  while |S'| > 1 do
3      Sequential(G')
4      Choice(G')

Sequential(G')
1  for all s_i : d_out[head[s_i]] = 0 do
2      if (d_out[tail[s_i]] = 1) and (tail[s_i] ≠ t_0) then
3          for all s_j : head[s_j] = tail[s_i] do
4              stack[s_j] ← stack[s_i] ∪ s_j
5          S' ← S' \ s_i
6      else  // tail[s_i] is a return point
7          s ← sequential composition of the services in stack[s_i]
8          remember R(s) as the weight of the edge s_i in graph G
9          S' ← (S' \ s_i) ∪ s

Choice(G')
1  for all t_i : (d_out[t_i] > 1) and (∀ t_k ∈ adj[t_i] : d_out[t_k] = 0) do
2      s ← choice composition of the services {s_j} = {(t_i, t_k)}, t_k ∈ adj[t_i]
3      for all s_j : head[s_j] = t_i do
4          stack[s_j] ← s
5      S' ← S' \ {s_j}

Proposition 1 Given a composition graph G = (T, S), the algorithm FRE measures the failure risks of the composite services defined by the subtrees of return points in time polynomial in the number of available web services.

A proof follows from the following observations:

Sequential(G'): The information about each service s_i with d_out[tail[s_i]] = 1 is pushed into d_in[tail[s_i]] stacks and can be processed in time O(1) later. So, for any state tail[s_i], time O(d_in[tail[s_i]]) is required, and this state will not be considered again by this function. Summing over all states, we get a processing time for all sequential compositions of O(Σ_{i=1}^{n} d_in[tail[s_i]]) = O(m).

Choice(G'): For each state t_i, time m d_out[t_i] ln(d_out[t_i]) is required to calculate the failure risk and d_in[t_i] to transmit this information to the preceding services. After that, its outgoing edges are deleted and it will not be considered again. Hence, the processing of all choice compositions can be accomplished in time O(Σ_{i=1}^{n} (d_in[t_i] + m d_out[t_i] ln(d_out[t_i]))) = O(m^2 ln m).

A composition configuration can be chosen by a simple greedy heuristic that selects the service with the least failure risk in any return state. In case of a service failure, gradual change is preferable to sudden, large-scale switching to another configuration, since we maximally reuse the results of already invoked services; that is, a composite service should go back to the nearest return point and complete the job by attaching a new configuration. Alternative choices with poor quality (i.e., ones that increase the total failure risk) are excluded from the calculation of the failure risk of a choice composition.
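The bottom-up contraction performed by FRE can be sketched in a heavily simplified form: a chain of atomic services is contracted to a (p, q, R) triple via formula (1), a choice over alternative chains keeps the candidate prefix minimizing formula (2), and a chain followed by a choice is combined via the composite recurrence of Section 4.2. The way q is aggregated over a choice's candidates is our own simplifying assumption, not spelled out in the paper, and all service values are invented.

```python
# Simplified sketch of FRE-style contraction. Assumptions: atomic services
# are (p_success, q_loss) pairs; choice q sums over the included candidates.

def chain_stats(services):
    """Return (p, q, R) of a sequential chain of atomic services (formula 1)."""
    p_all, q_all, risk = 1.0, 0.0, 0.0
    for p, q in services:
        q_all += q
        risk += p_all * (1 - p) * q_all   # P(first failure at s_i) * loss
        p_all *= p
    return p_all, q_all, risk

def choice_stats(alternatives):
    """Return (pbar, q) of a choice over alternative chains, keeping the
    candidate prefix that minimizes r = pbar * q (formula 2)."""
    pbar, q = 1.0, 0.0
    best = (float("inf"), 1.0, 0.0)
    for alt in alternatives:
        p_a, q_a, _ = chain_stats(alt)
        pbar *= (1 - p_a)
        q += q_a
        if pbar * q < best[0]:
            best = (pbar * q, pbar, q)
    return best[1], best[2]

def seq_then_choice(c1, alternatives):
    """R(c1; c2) = R(c1) + p(c1) * pbar(c2) * (q(c1) + q(c2))."""
    p1, q1, r1 = chain_stats(c1)
    pbar2, q2 = choice_stats(alternatives)
    return r1 + p1 * pbar2 * (q1 + q2)

c1 = [(0.99, 10.0), (0.98, 15.0)]
alts = [[(0.9, 5.0)], [(0.85, 5.0)]]
print(round(seq_then_choice(c1, alts), 4))
```

Repeating these two contraction steps until a single edge remains mirrors the alternation of Sequential and Choice in Algorithm 1.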
However, such services can still be invoked to recover from a failure if there are no better alternatives and the loss of rolling back to the next return point is significant.

6 Concluding Remarks and Future Work

Risk analysis is an important process accompanying the development of any significant software system. Being business-oriented applications, constructed from uncontrolled components and used via

the Internet, web service compositions imply a long list of potential risks, varying from security threats to economic unprofitability. In such conditions, self-adaptivity and redundancy are desirable composition qualities, traded against development cost and performance efficiency. We proposed a model for selecting web services that aims at increasing the reliability of service compositions. We introduced the notion of failure risk for web services and proposed a methodology for evaluating the failure risk of a composition. Further, we briefly described a selection algorithm based on local analysis of promising configurations. Several important issues have been left out of the scope of this paper. A careful study of parallel service compositions and of the dependencies between different configurations is needed. Comprehensive experiments should be carried out in order to characterize the effectiveness of our model. Here we pursued the goal of failure risk minimization given that a request should be executed within some constraints on time and cost, but without the intention to minimize these expenses. The problem of finding an execution plan that maximizes the probability of fulfilling the task while spending minimal resources can be tackled in the same manner. In some cases we can distinguish permanent faults from temporary faults (e.g., intermittent server crashes). The latter are characterized by a time to repair; it can be reasonable to repeat the invocation of the same service instead of switching to an alternative configuration. We intend to investigate more deeply the optimization problems related to the usage of web service compositions. As future work we are planning to look into the perspectives of using advanced scheduling policies. Real-life conditions such as limited service capacity, failures, execution deadlines, provider vs. client interests, etc., make the development of reliable composite services a challenging task.
Static selection models can be useful if the chosen services require significant adaptation efforts and the involvement of alternative ones is not feasible or is expensive. In dynamic scenarios the end user would prefer a fast service despite its low average availability if it is available at the moment of invocation, or a slower service if the faster one is overloaded.

Acknowledgments

We are grateful to Marco Aiello for his useful comments on the paper.

REFERENCES

[1] Aggarwal, R., et al.: Constraint-driven Web Service Composition in METEOR-S, IEEE Conference on Services Computing.
[2] Andrews, T., Curbera, F., et al.: Business Process Execution Language for Web Services, 2003, ftp://www6.software.ibm.com/software/developer/library/ws-bpel.pdf.
[3] Ardagna, D., Pernici, B.: Global and Local QoS Constraints Guarantee in Web Service Selection, IEEE International Conference on Web Services, 2005, pp.
[4] Cardoso, J., Sheth, A., Miller, J., Arnold, J., Kochut, K.: Quality of service for workflows and web service processes, Journal of Web Semantics, Vol. 1, No. 3, 2004, pp.
[5] Chafle, G., Chandra, S., Kankar, P., Mann, V.: Handling Faults in Decentralized Orchestration of Composite Web Services, International Conference on Service-Oriented Computing, 2005, pp.
[6] Dan, A., Davis, D., Kearney, R., et al.: Web services on demand: WSLA-driven automated management, IBM Systems Journal, Vol. 43, No.
1, 2004, pp.
[7] Martin-Diaz, O., Ruiz-Cortes, A., Duran, A., Muller, C.: An Approach to Temporal-Aware Procurement of Web Services, International Conference on Service-Oriented Computing, 2005, pp.
[8] Gu, X., Chang, R.: QoS-Assured Service Composition in Managed Service Overlay Networks, IEEE International Conference on Distributed Computing Systems.
[9] Kokash, N., D'Andrea, V.: Service-Oriented Computing and Coordination Models, Proceedings of the Challenges in Collaborative Engineering Workshop, 2005, pp.
[10] Lazovik, A., Aiello, M., Papazoglou, M.: Planning and monitoring the execution of web service requests, International Conference on Service-Oriented Computing, 2003, pp.
[11] Menasce, D., Almeida, V.: Capacity Planning for Web Services, Prentice Hall, Upper Saddle River, NJ.
[12] Papazoglou, M. P., Georgakopoulos, D.: Service-oriented computing, Communications of the ACM, Vol. 46, No. 10, 2003, pp.
[13] Ran, Sh.: A Model for Web Services Discovery With QoS, ACM SIGecom Exchanges, Vol. 4, No. 1, 2003, pp.
[14] Sherchan, W., Krishnaswamy, Sh., Loke, S.-W.: Relevant Past Performance for Selecting Web Services, International Conference on Quality Software, 2005, pp.
[15] Srivastava, B., Koehler, J.: Web Service Composition - Current Solutions and Open Problems, Proceedings of the ICAPS Workshop on Planning for Web Services.
[16] Sirin, E., Hendler, J., Parsia, B.: Semi-automatic Composition of Web Services Using Semantic Descriptions, Web Services: Modeling, Architecture and Infrastructure Workshop at ICEIS.
[17] Yu, T., Lin, K.-J.: Service Selection Algorithms for Composing Complex Services with Multiple QoS Constraints, International Conference on Service-Oriented Computing, 2005, pp.
[18] Zeng, L., Benatallah, B., et al.: QoS-aware Middleware for Web Services Composition, IEEE Transactions on Software Engineering, Vol. 30, No. 5, 2004, pp.

Abduction for Specifying and Verifying Web Services and Choreographies

Federico Chesani, Paola Mello, Marco Montali 1
Marco Alberti, Marco Gavanelli, Evelina Lamma, Sergio Storari 2

Abstract. Global choreographies have recently been proposed as a way of specifying the overall behaviour of a system composed of heterogeneous web services. In this work, we propose an abductive framework based on computational logic to specify both choreographies and web service interface behaviours. One of the main motivations for using computational logic is that its operational counterpart provides proof-theoretic support able to verify, from different viewpoints, the conformance of services designed in a cooperative and incremental manner. We show how it is possible to specify both the choreography and the web service interface behaviours (restricted to the conversation aspects) using a uniform formalism based on abductive logic programs. Then, we provide a definition of conformance and show how, by using an abductive proof procedure, it is possible to automatically verify the conformance of a given web service w.r.t. a certain choreography.

1 INTRODUCTION

The recent and fast growth of network infrastructures, such as the Internet, is spawning a new range of scenarios and emerging paradigms for distributed computing. One of them is Service Oriented Computing, which has its origins in object-oriented and component computing. Web service technology is an important instance of Service Oriented Computing, aiming at facilitating the integration of new applications and avoiding difficulties due to different platforms, languages, etc.
Service composition - building complex services from simpler ones - requires methodologies to verify, for instance, whether the communicative behaviour (behavioural interface) of an existing service conforms to the interaction rules (a choreography) and, thus, whether the service in question would be able to play a given role in that choreography. From the web service technology viewpoint, a fundamental requirement [4] is to provide tools to validate conformance to choreography descriptions in order to ensure interoperability, to enable static or dynamic verification of choreographies, and to ensure that the local behaviour of participants conforms to the choreography specification. To this purpose, it is crucial to formalize both choreographies and (behavioural interfaces of) web services using formal languages that provide these validation capabilities. In the Multi-Agent Systems (MAS) community, on the other hand, there is a wide literature on checking the compliance of agents to social rules, both at run-time and at design-time. Baldoni et al. [3] first recognised the similarities between the two areas. In particular, societies

1 DEIS, University of Bologna
2 ENDIF, University of Ferrara

of agents and compositions of web services, although defined and designed in different contexts, share the following features:

- They describe a collaboration between a collection of parties in order to achieve a common goal.
- They should capture the interactions in which the participants engage to achieve this goal.
- They should capture the dependencies between interactions (control-flow dependencies, message correlations, time constraints, etc.) by expressing global interaction protocols.
- They should capture interactions from a global perspective and, therefore, should not describe any internal action that does not directly result in an external, visible effect.
- Protocols should be expressed, if possible, in a formal language.
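The conformance idea described above, that a service's observable conversation behaviour must fit the interaction rules of a choreography, can be illustrated with a toy check. This is a plain finite-state sketch, far simpler than the abductive machinery developed below; the protocol, message names and states are all invented.

```python
# Toy illustration of conformance to an interaction protocol: a
# choreography constrains which message may follow which, and an observed
# conversation either conforms or not. All names are invented.

# allowed transitions: state -> {message type: next state}
protocol = {
    "start":     {"request": "requested"},
    "requested": {"accept": "done", "refuse": "done"},
}

def conforms(trace, start="start", final=("done",)):
    state = start
    for msg in trace:
        nxt = protocol.get(state, {}).get(msg)
        if nxt is None:
            return False        # message not allowed in this state
        state = nxt
    return state in final       # conversation must end in a final state

print(conforms(["request", "accept"]))  # True
print(conforms(["accept"]))             # False: accept before request
```

Real choreography conformance must also handle roles, message contents and time constraints, which is why the paper resorts to abductive logic programs rather than a plain state machine.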
In this work, we investigate the feasibility of using an abductive framework grounded on computational logic, defined in the context of MAS for Global Computing, for modelling both choreographies and behavioural interfaces of Web Services. We build upon the SCIFF framework, developed in the area of MAS for the specification and verification of interaction in open agent societies. In SCIFF, a social specification is meant to specify the observable agent behaviour, rather than the agents' internals or policies. SCIFF has been used for expressing the social semantics of agent communication languages [1], and a number of interaction protocols. We propose a framework, called AlLoWS (Abductive Logic Web-service Specification), where both Web Services and choreographies are modelled as abductive logic programs. In AlLoWS, we exploit the SCIFF proof procedure to formally prove conformance of the participating Web Services to a choreography specification, by identifying sets of abduced hypotheses which satisfy both the specifications and the formal definition of conformance. In this way, we statically analyse the behaviour of choreographies and individual Web Services, and prove a priori conformance.

2 AlLoWS: AN ABDUCTIVE FRAMEWORK

The framework we propose, AlLoWS (Abductive Logic Web-service Specification), is based on Abductive Logic Programming (ALP [14]), and in particular is derived from the SCIFF framework, developed in the SOCS project for open societies of agents [1]. As we demonstrate in the following, the language is expressive enough to specify typical choreographies and web services. The operational semantics of the language is used for the verification of conformance of a web service to a choreography. In AlLoWS, the behaviour of the web services is represented by means of events. Since we focus on the interactions between web

services, events always represent exchanged messages. The syntax is the same used in [3]: a message is described by the term m_x(Sender, Receiver, Content), where m_x is the type of the message, and the arguments retain their intuitive meaning. AlLoWS has two types of events: happened events (denoted by the functor H) and expected events (denoted by E, and also called expectations). Both are abducible, and represent hypotheses on, respectively, which events have happened and which are expected to happen: H(m_x(...), T_x) expresses the fact that a message m_x has been exchanged between two peers at time T_x, whereas E(m_x(...), T_x) says that the message m_x is expected to be exchanged at time T_x. In this paper, the time parameter may be omitted when not essential. Choreographies and web services are specified by imposing a relation between happened and expected events by means of an abductive logic program, as explained in Sections 2.1 and 2.2.

2.1 Specification of a Choreography

A choreography describes, from a global viewpoint, the patterns of communication, or interactions, allowed in a system that adopts such a choreography [4]. The choreography specification defines the messages that are allowed: it is not possible to exchange messages other than the ones specified, and only in the defined order. The choreography also lists the participants, the roles the participants can play, and other knowledge about the web service interaction. We specify a choreography by means of an abductive logic program. An abductive logic program [14] is a triple ⟨P, A, IC⟩, where P is a logic program, A is a set of distinguished predicates named abducibles, and IC is a set of integrity constraints. Reasoning in abductive logic programming is usually goal-directed (G being a goal), and corresponds to finding a set Δ of abduced hypotheses built from predicates in A such that P ∪ Δ |= G and P ∪ Δ |= IC. Suitable proof procedures (e.g., Kakas-Mancarella [15], IFF [12], SLDNFA [7], etc.)
have been proposed to compute such a set, in accordance with the chosen declarative semantics. A choreography specification P_chor is defined by the tuple:

P_chor = ⟨KB_chor, E_chor, IC_chor⟩

KB_chor is the Knowledge Base, E_chor is the set of abducible predicates, and IC_chor is the set of Choreography Integrity Constraints. KB_chor specifies declaratively pieces of knowledge of the choreography, such as role descriptions and the list of participants. It is expressed in the form of clauses (a logic program) that may also contain in their body expectations about the behaviour of the participants. The abducible predicates are those that can be hypothesized in our framework, namely happened events (H) and expectations (E). Choreography Integrity Constraints are forward rules, of the form body → head, whose body can contain literals and (happened and expected) events, and whose head is a conjunction of expectations. The syntax of IC_chor is the same defined for the SOCS Integrity Constraints [1], but in AlLoWS we do not use negative expectations and negation (¬), since they are not needed. Consider the simple choreography in Fig. 1, where a multi-party interaction is shown as a UML activity diagram.

Figure 1. UML activity-chart of a simple choreography

The interaction is initiated by a Customer that asks the Supplier for a certain good. The Supplier queries the Warehouse for the available quantity of that good, and forwards the answer to the Customer. Then, if the quantity Q is higher than zero, the Supplier notifies the bill to the Customer, and a shipment order is sent to the Warehouse. Otherwise the interaction terminates, since there is no good available for buying. The specification in terms of IC_chor is given by Eqs. (1)-(5): in particular, Eq. (4) shows how to express alternative behaviours whose selection depends on the content of previous messages, and it shows how to express concurrency of actions.
It specifies that, after communicating the available quantity Q of Good to the Customer, and if Q > 0, the Supplier should notify the bill to the Customer, and a shipment order to the Warehouse. In AlLoWS, the condition Q > 0 is intended as a constraint à la CLP (Constraint Logic Programming, [13]); CLP constraints can also be used to impose an order between messages, by imposing relations on the time instants at which the events are expected.

H(request(Customer, Supplier, Good), T_r) → E(query(Supplier, Warehouse, Good), T_q) ∧ T_q > T_r.   (1)
H(query(Supplier, Warehouse, Good), T_q) → E(answer(Warehouse, Supplier, Good, Q), T_a) ∧ T_a > T_q.   (2)
H(answer(Warehouse, Supplier, Good, Q), T_a) → E(provide(Supplier, Customer, Q), T_p) ∧ T_p > T_a.   (3)
H(provide(Supplier, Customer, Q), T_p) ∧ Q > 0 → E(send(Supplier, Customer, Bill), T_b) ∧ E(shipOrder(Supplier, Warehouse, Good), T_s) ∧ T_b > T_p ∧ T_s > T_p.   (4)
H(send(Supplier, Customer, Bill), T_b) → E(pay(Customer, Supplier, Bill), T_p) ∧ T_p > T_b.   (5)

2.2 Specification of a Web Service Interface Behaviour

Uniformly to the specification of a choreography, we define the interface behaviour of a web service as an abductive logic program. We restrict our specification to the communicative aspects of the web service. A Web Service Interface Behaviour Specification P_ws is an abductive logic program, represented by the triple

P_ws = ⟨KB_ws, E_ws, IC_ws⟩

KB_ws is the Knowledge Base of the Web Service, E_ws is the set of abducible predicates, and IC_ws is the set of Integrity Constraints of the web service. KB_ws and IC_ws are completely analogous to their counterparts in the choreography specification, except that they represent an individual, rather than global, perspective: they represent, respectively, the declarative knowledge and the policies of the web service. E_ws is the set of abducible predicates: as for choreographies, it contains both expectations and happened events. The expectations in E_ws can be divided into two significant subsets:

- expectations about messages where ws is the sender (of the form E_ws(m_x(ws, A, Content))), i.e., actions that ws intends to do;
- expectations about messages uttered by other participants to ws (of the form E_ws(m_x(A, ws, Content)), with A ≠ ws), which can be intended as the messages that ws is able to understand.

Conformance can be based on the observation of the relations between the allowed interactions and the behaviours expected both by the choreography and by the web service. Based on the current interaction status, the choreography prescribes a coherent set of possible evolutions. Each evolution contains the expectations of the choreography on the behaviour of the participants: some expectations are about the web service ws under testing, while others concern the behaviour of the other participants. The set of expectations of the choreography will be indicated in the following with EXP_chor. Analogously, each web service participating in the choreography has expectations on the behaviour of the other peers: after a question it will expect a reply, or after sending a bill, it will expect a payment. Also, the web service might have expectations about its own behaviour, expressing the intention to perform some actions. The set of expectations of a web service ws is denoted by EXP_ws. We will often write E_A(...) as a shortcut for E(...) ∈ EXP_A.
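As an illustration, the two kinds of events and the two subsets of EXP_ws can be rendered in a few lines of Python. The class and function names below are our own invention, and the sketch deliberately ignores unification and constraints over time variables, which the actual framework handles through CLP.

```python
from dataclasses import dataclass
from typing import Optional

# A message term m_x(Sender, Receiver, Content), following the paper's syntax.
@dataclass(frozen=True)
class Message:
    mtype: str        # e.g. "query", "answer"
    sender: str
    receiver: str
    content: tuple = ()

# An event: functor "H" (happened) or "E" (expected); time may be omitted.
@dataclass(frozen=True)
class Event:
    functor: str
    msg: Message
    time: Optional[int] = None

def split_expectations(exp_ws, ws):
    """Partition EXP_ws into the messages ws intends to send and the
    messages ws is able to understand (uttered by others to ws)."""
    intends = {e for e in exp_ws if e.msg.sender == ws}
    understands = {e for e in exp_ws
                   if e.msg.receiver == ws and e.msg.sender != ws}
    return intends, understands

# The warehouse understands incoming queries and intends to answer them.
q = Event("E", Message("query", "supplier", "warehouse", ("good",)))
a = Event("E", Message("answer", "warehouse", "supplier", ("good", "qty")))
intends, understands = split_expectations({q, a}, "warehouse")
```

Here the warehouse's answer falls into the first subset (actions it intends to do) and the incoming query into the second (messages it can understand).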
The compliance of a web service is based on the compliance of single interactions: a web service is compliant if all the interactions it can possibly generate belong to the choreography specification. Note, however, that it is not necessary to check a full interaction history: as soon as a violation is detected, the subsequent (possibly infinite) evolutions are of no interest. We focus on the interactions of total agreement: those histories (i.e., sequences of happened events) for which all actions expected both by the choreography and by the web service under observation are indeed considered as happened.

Figure 2. Example of behavioural interfaces (panels (a) and (b))

In Fig. 2(a) the communicative part of the interface behaviour of a web service is represented as a UML activity diagram. The corresponding translation in terms of IC_ws is given by Eq. (6):

H(query(Supplier, Warehouse, Good), T_q) → E(answer(Warehouse, Supplier, Good, Q), T_a) ∧ T_a > T_q.   (6)

3 CONFORMANCE

Intuitively, conformance is the capability of a web service to interact with the other peers according to a choreography. A web service ws is conformant to a choreography if it complies with the choreography in all possible interactions. Although intuitive, such a definition is not directly usable for automatic verification, as it specifies neither which actions the web service will be able to perform, nor how the other peers will behave. We assume the web service will act according to its own specification, P_ws, and that the other peers will behave as defined in the choreography specification P_chor. For instance, in Fig. 2(a) and 2(b) two possible interface behaviours are shown: while the latter shows a web service conformant to the role of Customer in the choreography, the former represents a web service that would play the role of Warehouse, but is not conformant to that role w.r.t. the specified choreography. In particular, the non-conformance is due to the fact that the warehouse web service does not accept the possible incoming ship order message.

Definition 1 Given the abductive program ⟨KB_U, E_U, IC_U⟩, where

KB_U = KB_chor ∪ KB_ws
E_U = E_chor ∪ E_ws
IC_U = IC_chor ∪ IC_ws

an interaction is a pair (HAP, EXP), where HAP is a set of atoms built with the predicate H, and EXP is a set of atoms built with predicates in E_U, such that

KB_U ∪ HAP ∪ EXP |= G   (7)
KB_U ∪ HAP ∪ EXP |= IC_U   (8)

Total Agreement Interactions are the minimal interactions (HAP_py, EXP_py) for which

KB_U ∪ HAP_py ∪ EXP_py |= IC_tot

where IC_tot contains the following implications:

E_chor(m_x(S, R, M)) ∧ E_ws(m_x(S, R, M)) → H(m_x(S, R, M))   (9)
E_chor(m_x(A, B, M)) ∧ A ≠ ws ∧ B ≠ ws → H(m_x(A, B, M))   (10)

G is the goal of the derivation; it depends on the property of the web service we want to prove. Conditions (7) and (8) specify the abductive set EXP. Note that for each HAP there could be more than one EXP satisfying conditions (7) and (8). Condition (9) imposes that total agreement interactions contain in the history only messages that were expected by both the choreography and the web service. Condition (10) tackles multi-party protocols, and requires that any dialogue between peers other than ws evolves as specified in the choreography. We can now provide a definition of conformance based on the total agreement interactions, selecting those that indeed respect the choreography and web service specifications.
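To make the role of conditions (9) and (10) concrete, here is a small, purely propositional Python check of whether a candidate ground history forms a total agreement interaction. Message terms are plain (type, sender, receiver) tuples; the unification over message contents and the CLP constraints over time that the real framework performs are deliberately left out, and the function name is ours.

```python
def is_total_agreement(hap, exp_chor, exp_ws, ws):
    """Propositional reading of the total-agreement conditions for a
    ground history hap, choreography/web-service expectation sets, and
    the web service ws under test."""
    # Minimality: only expected messages appear in the history.
    for m in hap:
        if m not in exp_chor:
            return False
        if ws in (m[1], m[2]) and m not in exp_ws:
            return False
    # Condition (9): a message expected by both the choreography and
    # the web service must actually happen.
    if not (exp_chor & exp_ws) <= hap:
        return False
    # Condition (10): dialogues among the other peers must evolve as
    # the choreography prescribes.
    for m in exp_chor:
        if ws not in (m[1], m[2]) and m not in hap:
            return False
    return True

# Fragment of the running example, with ws = warehouse.
exp_chor = {("request", "customer", "supplier"),
            ("query", "supplier", "warehouse")}
exp_ws = {("query", "supplier", "warehouse")}
full = {("request", "customer", "supplier"),
        ("query", "supplier", "warehouse")}
```

With these sets the full history passes the check, while a history missing the query (which both sides expect) violates condition (9).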

Definition 2 (∃-Conformance) A web service specification P_ws is existentially conformant to a choreography specification P_chor if there exists at least one total agreement interaction (HAP_py, EXP_py) such that the following implications hold:

E_ws(m_x(ws, A, M)) → H(m_x(ws, A, M))   (11)
E_chor(m_x(S, A, M)) → H(m_x(S, A, M))   (12)

Definition 3 (∀-Conformance) A web service specification P_ws is universally conformant to a choreography specification P_chor if for all pairs (HAP_py, EXP_py) of Def. 1 the conditions (11)-(12) hold.

Condition (11) requires that if ws had an expectation about sending a message, the corresponding event should have happened. If this event is not present in HAP_py, from Def. 1 we can infer that the event was unexpected from the choreography viewpoint. This situation corresponds to the case where ws is determined to utter a message that is not envisaged by the choreography: in this case there is no conformance between ws and the choreography. Condition (12) requires that every message expected by the choreography indeed happens. Together with conditions (9) and (10), this means that all expectations of the choreography involving the web service under testing are matched by a corresponding expectation of the web service. This condition is false if there exists an expectation of the choreography without a corresponding expectation from the web service, i.e., either if ws may receive (during interactions approved by the choreography) a message it is not able to understand, or if the web service failed to utter a message that the choreography expects. For symmetry reasons, one would expect also a rule about the satisfaction of a web service's expectations on the behaviour of the other peers.
Such a rule is not necessary, because in this case either the interaction is considered successfully finished by the choreography (and in this case we consider the web service conformant), or ws will be expected to perform some further interaction, which will remain unfulfilled from the choreography viewpoint (and the non-conformance will be detected by one of the rules (11)-(12)). When Definitions 2 and 3 hold together for a web service ws, we say that ws is conformant. Let us see how conformance can be tested.

4 CONFORMANCE TEST

AlLoWS uses the SCIFF proof procedure for proving conformance. SCIFF is an abductive proof procedure developed for on-the-fly conformance checking in MASs [2]. It takes as input an initial history of happened events HAP_i and generates the expected behaviour of the agents (EXP), while accepting further events on-line. SCIFF consists of a sequence of transitions; when it reaches quiescence in non-failure nodes, it has a successful derivation, indicated with

KB ∪ HAP_i ⊢_EXP^(HAP_f) G

where G is the goal of the society, HAP_f is the final history at quiescence, and EXP is the set of generated expectations. In order to perform a priori conformance checking, we need to generate sets of events. We therefore developed a generative version of SCIFF, which is able to produce the happened events, instead of just taking them as input. We declared H as an abducible predicate and added the integrity constraints (9) and (10). In this way, we have

KB ⊢_(EXP, HAP) G

i.e., a SCIFF derivation provides the set of happened events as output of the abduction process. The verification of the conformance of a web service P_ws to a choreography P_chor is performed in two steps.

Generation of total agreement interactions. The first step is to build total agreement interactions. This is done by applying the generative version of SCIFF to the abductive program of Def.
1, with the additional integrity constraints (9) and (10), i.e., to ⟨KB_U, E_U, IC_U ∪ IC_tot⟩. SCIFF is sound and complete for the so-called allowed programs, i.e., under some syntactic restrictions that are satisfied by the programs of AlLoWS. It can be trivially proven that declaring the H predicate as abducible and adding integrity constraints (such as (9) and (10)) does not undermine these results, so the version exploited in AlLoWS is sound and complete as well. Thanks to these results, the non-failure leaf nodes of the proof tree are exactly the total agreement interactions. Thus, if this computation fails, there exists no total agreement interaction, so the Web Service is not ∃-conformant.

Verification of total agreement histories. The second step is to perform a SCIFF derivation to check whether all the total agreement interactions generated in the previous step respect the conditions in Def. 2. In this phase we adopt the original version of SCIFF, where H is not declared as abducible but is a predicate defined by the atoms in the set HAP_py (generated in the previous step). In this phase, we test that in each non-failure leaf node conditions (11) and (12) hold. The Web Service is universally conformant to the choreography if and only if this test succeeds on all the non-failure leaf nodes.

5 A CONFORMANCE TEST EXAMPLE

We discuss here only the conformance test of the Warehouse, and show how AlLoWS detects non-conformance. We start the conformance test by providing as goal expectations about the events initiating the interaction:

G = {E_chor(request(Customer, Supplier, Good), T_r), E_ws(query(Supplier, Warehouse, Good), T_q)}   (13)

If we apply the generation step described in Sect. 4, we obtain the total agreement interactions. We show one of the outcomes in Fig. 3.
Then, by applying the second step of the conformance test, it turns out that this total agreement set is not ∀-conformant, since the expectation E_chor(shipOrder(Supplier, Warehouse, Good), T_s) does not have the corresponding happened event, as required by Eq. (12). In fact, the choreography specifies that the Supplier should send a shipment order to the Warehouse; unfortunately, the latter is not able to understand the order (it does not expect such a message), and the choreography expectation does not have the corresponding happened event.

6 DISCUSSION AND RELATED WORK

While a detailed comparison of the SCIFF language with other languages for web service and choreography specification is, to date, ongoing work, we believe that the language provides sufficient expressivity for a wide range of applications, and to express typical constructs found in other specification languages, such as WS-CDL [17]. For example, the integrity constraints in Eq. (14)

H(a, T_a) → E(b, T_b) ∧ T_b > T_a ∨ E(c, T_c) ∧ T_c > T_a
H(b, T_b) → E(a, T_a) ∧ T_a > T_b   (14)

represent a loop, where the event a can be followed by the event b and another occurrence of a, or by an event c. The integrity constraint in Eq. (15)

H(a, T_a) ∧ c → E(b, T_b) ∧ T_b > T_a   (15)

has an operational semantics similar to that of a workunit: when the event a happens, the expectation b will be raised only when the predicate c, which represents a condition, becomes true. One limitation of our language lies in the current difficulty of representing the operator of unconditional choice in choreographies.

Figure 3. A total agreement interaction generated during the first step of the conformance test.

EXP_py = { E_chor(request(C, S, Good), T_r), E_chor(query(S, W, Good), T_q), E_ws(query(S, W, Good), T_q), E_chor(answer(W, S, Good, Q), T_a), E_ws(answer(W, S, Good, Q), T_a), E_chor(provide(S, C, Q), T_p), E_chor(send(S, C, Bill), T_b), E_chor(shipOrder(S, W, Good), T_s), E_chor(pay(C, S, Bill), T_p') }
HAP_py = { H(request(C, S, Good), T_r), H(query(S, W, Good), T_q), H(answer(W, S, Good, Q), T_a), H(provide(S, C, Q), T_p) with Q > 0, H(send(S, C, Bill), T_b), H(pay(C, S, Bill), T_p') }

An interesting observation is that the SCIFF integrity constraints can be interpreted, both declaratively and operationally, as reactive rules: they specify that, if a state of affairs (i.e., a combination of events) occurs, then an individual web service, or a composition of web services, should behave in some way. We follow Bry and Eckert [6] in believing that reactive rules are a suitable declarative and operational formalism to specify web systems.
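The reactive, data-driven reading can be sketched with a toy event loop in Python: each incoming event is matched against rule triggers as it arrives, raising or fulfilling expectations. This is only our illustration of the idea; the actual SCIFF transitions additionally deal with unification, CLP constraints, and disjunctive heads.

```python
class ReactiveRules:
    """A toy forward evaluator for rules of the shape 'event -> expectations'."""
    def __init__(self):
        self.rules = []            # list of (trigger, expectations) pairs
        self.pending = set()       # expectations raised but not yet fulfilled

    def add_rule(self, trigger, expectations):
        self.rules.append((trigger, set(expectations)))

    def happen(self, event):
        # Incremental, data-driven evaluation: no re-scan of the history.
        self.pending.discard(event)        # a matching event fulfils an expectation
        for trigger, exps in self.rules:
            if trigger == event:
                self.pending.update(exps)

rules = ReactiveRules()
rules.add_rule("a", {"b"})     # after a, b is expected (one branch of Eq. 14)
rules.add_rule("b", {"a"})     # after b, a is expected again: a loop
rules.happen("a")              # raises the expectation of b
rules.happen("b")              # fulfils it, and raises the expectation of a
```

After the two events, only the expectation of a new occurrence of a remains pending, mirroring the loop behaviour described for Eq. (14).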
In particular, out of Bry and Eckert's Twelve Theses, the SCIFF language fulfills:

- the first ("High level reactive languages are needed on the Web"): the SCIFF language abstracts away from the low-level transport and communication protocols;
- the second ("Reactive Web rules should be processed locally and act globally through event-based communication"): a SCIFF-based web service is only observable in its communicative acts, while the reasoning process that generates those acts remains hidden;
- the sixth ("A data-driven, incremental evaluation of event queries is the approach of choice"): operationally, the SCIFF integrity constraints are resolved incrementally, as new events are processed.

A number of languages for specifying service choreographies and testing a priori and/or run-time conformance have been proposed in the literature. Two examples of such languages are state machines [5] and Petri nets [8]. Our work is highly inspired by Baldoni et al. [3]. Like them, we adopt a Multi-Agent Systems point of view, in defining a priori conformance in order to guarantee interoperability. As in [3], we give an interpretation of a priori conformance as a property that relates two formal specifications: the global one, determining the conversations allowed by the choreography, and the local one, related to the single web service. But, while in [3] a global interaction protocol is represented as a finite state automaton, we claim that the formalisms and technologies developed in the area of Computational Logic for providing a declarative representation of social knowledge can also be applied in the context of choreographies, with respect to the conversation aspects and conformance checking of Web Services. This paper can be considered a first step in this direction. For example, a difference between our work and [3] can be found in the number of parties: they can manage only 2-party choreographies, while we do not impose any limit.
We also manage concurrency, which they do not consider at the moment. Another similar work is described in [5]. In that work, the authors focus on two-party choreographies, each involving a requester and a provider (named service conversations), and formulate some requirements for a modelling language suitable for them. The requirements include genericity, automated support, and relevance. The authors argue that state machines satisfy these requirements, and sketch an architecture of a service conversation controller capable of monitoring the messages exchanged between a requester and a provider in order to determine whether they conform to a conversation. An example of the use of Petri nets for the formalization of choreographies is discussed in [8]. Four different viewpoints (interface behaviour, provider behaviour, choreography, and orchestration) and relations between viewpoints are identified and formalised. These relations are used to perform (global) consistency checking of multi-viewpoint service designs, thereby providing a formal foundation for incremental and collaborative approaches to service-oriented design. Our proposal is limited to a deep analysis of the relation between choreographies and behavioural interfaces, but deals with both a priori and run-time conformance. Foster et al. [11] propose a model-based approach to verify that a given service composition can successfully execute a choreography, in particular with respect to the obligations imposed on the services by the choreography. The web service specifications and the choreography are translated to the FSP algebra and checked by model checking techniques. The main difference with respect to our work is that Foster et al. check a whole service composition against a choreography, while we only check a single web service, assuming that the others are conformant. Another notable difference is in the adopted formal approaches (abduction in our case, model checking in theirs).
In [9, 10], the authors apply a formalism based on computational logic to a priori conformance in the MAS field. Their formalism is similar to the one we propose, but they restrict their analysis to a particular type of protocols (named shallow protocols). In doing so, they address only 2-party interactions, without the possibility of expressing conditions over the content of the exchanged messages, and without considering concurrency. The use of abduction for verification has also been explored in other work. Notably, Russo et al. [16] use an abductive proof procedure for analysing event-based requirements specifications. Their method uses abduction for analysing the correctness of specifications, while our system is more focussed on checking the compliance/conformance of a set of web services.

7 CONCLUSIONS AND FUTURE WORK

Starting from theory and tools for protocol conformance and compliance verification in the research area of multi-agent systems, we propose a formal framework based on computational logic (and, in particular, on abduction) that can be used for specifying the behaviour of web services both from a global and from a local viewpoint.

In AlLoWS, the use of computational logic provides us with a rich and flexible formal language for expressing choreographies. Abductive Integrity Constraints, in fact, differently from traditional formalisms such as finite state machines, allow us to specify protocols without describing each legal state of the interaction. ICs allow us to include extra integrity constraints that in semi-formal models are usually left implicit. Moreover, the explicit representation of time and other parameters within the constraints allows us to handle synchronization, roles and, in general, conditions related to the content of messages. For instance, we have shown that multi-party interactions and concurrency can be tackled. Within the AlLoWS framework we formalize a notion of a priori conformance similar to the one presented in [3], and show that a proof-theoretic approach can be exploited to perform the conformance test. To this purpose, a suitable proof procedure derived from SCIFF [2] is presented, in order to automatically check conformance at design-time. In future work, we intend to extend the set of interaction patterns supported by AlLoWS. Moreover, we are aware that the exploitation of this language should be supported by a graphical language, helping users in capturing specifications, model verification and validation. To this purpose we are developing a visual language for expressing global protocols and a program for its automatic translation into ICs. Ongoing work on the AlLoWS framework includes a detailed comparison with other languages for choreography specification, such as WS-CDL, and the implementation, and performance tests, of specific case studies.

Acknowledgements

This work has been partially supported by the MIUR PRIN 2005 projects "Specification and verification of agent interaction protocols" and "Vincoli e preferenze come formalismo unificante per l'analisi di sistemi informatici e la soluzione di problemi reali".

REFERENCES

[1] Marco Alberti, Federico Chesani, Marco Gavanelli, Evelina Lamma, Paola Mello, and Paolo Torroni, The SOCS computational logic approach for the specification and verification of agent societies, in Global Computing: IST/FET International Workshop, GC 2004, Rovereto, Italy, March 9-12, 2004, Revised Selected Papers, eds., Corrado Priami and Paola Quaglia, volume 3267 of Lecture Notes in Artificial Intelligence, Springer-Verlag, (2005).
[2] Marco Alberti, Marco Gavanelli, Evelina Lamma, Paola Mello, and Paolo Torroni, The SCIFF abductive proof-procedure, in Proceedings of the 9th National Congress on Artificial Intelligence, AI*IA 2005, volume 3673 of Lecture Notes in Artificial Intelligence, Springer-Verlag, (2005).
[3] Matteo Baldoni, Cristina Baroglio, Alberto Martelli, Viviana Patti, and Claudio Schifanella, Verifying the conformance of web services to global interaction protocols: A first step, in EPEW/WS-FM, eds., Mario Bravetti, Leïla Kloul, and Gianluigi Zavattaro, volume 3670 of Lecture Notes in Computer Science, Springer, (2005).
[4] A. Barros, M. Dumas, and P. Oaks, A critical overview of the web services choreography description language (WS-CDL), BPTrends, (2005).
[5] B. Benattallah, F. Casati, F. Toumani, and R. Hamadi, Conceptual modeling of web service conversations, volume 2681 of Lecture Notes in Computer Science, (2003).
[6] François Bry and Michael Eckert, Twelve theses on reactive rules for the web, in Proceedings of the Workshop on Reactivity on the Web, Munich, Germany, (March 2006).
[7] M. Denecker and D. De Schreye, SLDNFA: an abductive procedure for abductive logic programs, Journal of Logic Programming, 34(2), (1998).
[8] R. Dijkman and M. Dumas, Service-oriented design: A multi-viewpoint approach, International Journal of Cooperative Information Systems, 13(4), (2004).
[9] U. Endriss, N. Maudet, F. Sadri, and F. Toni, Protocol conformance for logic-based agents, in Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, eds., G. Gottlob and T. Walsh, Morgan Kaufmann Publishers, (August 2003).
[10] Ulle Endriss, Nicolas Maudet, Fariba Sadri, and Francesca Toni, Logic-based agent communication protocols, in Advances in Agent Communication, ed., F. Dignum, volume 2922 of LNAI, Springer-Verlag, (2004).
[11] Howard Foster, Sebastian Uchitel, Jeff Magee, and Jeff Kramer, Model-based analysis of obligations in web service choreography, in Proceedings of the International Conference on Internet and Web Applications and Services (ICIW 2006), Guadeloupe, French Caribbean, IEEE Computer Society Press, (2006).
[12] T. H. Fung and R. A. Kowalski, The IFF proof procedure for abductive logic programming, Journal of Logic Programming, 33(2), (November 1997).
[13] J. Jaffar and M. J. Maher, Constraint logic programming: a survey, Journal of Logic Programming, 19-20, (1994).
[14] A. C. Kakas, R. A. Kowalski, and F. Toni, Abductive Logic Programming, Journal of Logic and Computation, 2(6), (1993).
[15] A. C. Kakas and P. Mancarella, On the relation between Truth Maintenance and Abduction, in Proceedings of the 1st Pacific Rim International Conference on Artificial Intelligence, PRICAI-90, Nagoya, Japan, ed., T. Fukumura, Ohmsha Ltd., (1990).
[16] A. Russo, R. Miller, B. Nuseibeh, and J. Kramer, An abductive approach for analysing event-based requirements specifications, in Logic Programming, 18th International Conference, ICLP 2002, ed., P. J. Stuckey, volume 2401 of Lecture Notes in Computer Science, Springer-Verlag, Berlin Heidelberg, (2002).
[17] W3C, Web services choreography description language version 1.0.

An Immune System-Inspired Approach for Composite Web Services Reuse

Rosanna Bova 1, Salima Hassas 1, Salima Benbernou 1

Abstract. Recently, a growing number of Web Services have emerged at a fast rate. However, there are many situations where individual web services, alone, cannot satisfy user requirements. This has raised the need for service providers and application developers to develop value-added services by combining existing web services into composite web services. Web services composition is now a very active research field, and several solutions have been proposed. However, few existing solutions address the issue of service composition reuse and specialization, i.e., how applications can be built upon existing simple or composite web services by reuse, restriction, or extension. In this paper, we introduce the concept of abstract composite web service, which can be specialized to particular concrete compositions, and which can be reused in the construction of larger or extended compositions. We propose an approach based on the immune system, in which we combine structural and usage information in order to promote and create stable composite web services, in an affinity maturation process.

1 INTRODUCTION

A web service is a software system designed to support interoperable machine-to-machine interaction over a network, based on standard web protocols and XML encoding. There has been a great deal of research in the area of web services in the past few years. A large part of these studies has been dedicated to web services composition. The original idea of service composition is not really new in software design. It can be traced back to the simple concept of an executable library in many programming languages, the so-called code reuse. Applying this idea to software engineering, people have developed new applications using previously written software components, leading to software composition or software reuse.
In the web services community, the efforts are now mainly dedicated to some parts of this problem, which, generally speaking, are the discovery of useful partner web services, their composition, and their execution. In this paper, we are interested in studying how web services can be composed to provide more complex features, by reusing abstract web service composites that can be specialized to particular concrete composites, or reused in the construction of larger or extended composites. The former type of reuse can be viewed as a descending (top-down) approach, while the latter is rather an ascending (bottom-up) approach, where small composites are used in larger composites. This paper aims at defining a composition framework, and a composer system, to achieve these goals semi-automatically. Section 2 gives our problem formulation, illustrated by an example scenario in section 3. Section 4 presents and discusses our approach and the processes involved, along with the immune system metaphor. Section 5 situates it with respect to related works, and section 6 concludes and presents the main perspectives of this work. ¹ LIRIS, UMR 5205 CNRS, Université Claude Bernard Lyon 1, bâtiment Nautibus (710), 43 bd du 11 novembre 1918, Villeurbanne, France; {rosanna.bova, salima.hassas, 2 PROBLEM FORMULATION Our goal is to define a web service composer system and framework. This web service composer system will be able both to provide (reuse) and to execute a composite web service requested by a user or another system. As such, we assume that the composer system is able to obtain success feedback about the execution of each concrete composite web service. If the composite service is to be executed by an external system, then a feedback mechanism is needed to provide this feedback loop. We consider that a composite service is a web service involving the calling of other web services (composite or elementary) during its execution. 
These referenced web services are typically structured in a sort of script or program using alternatives (conditions), loops, parallel and sequence operators. For example, a BPEL script defines such a composite web service. From the outside, a composite web service is like a normal executable web service, published and accessed using the same protocols. In the context of our approach, however, we see a composite web service as a grey box, rather than a black box, in the sense that part of its internal structure is accessible to our system. To be able to reason about the similarity or compatibility of functionalities across several web services, we assume that there exists a higher-level ontology and description language to describe these functionalities, which goes beyond the mere interface (operations, inputs, outputs) of the web service. Defining such an ontology is in itself a difficult problem, but is out of the scope of this paper. We also suppose that this ontology is associated with a repository or matching engine, in charge of selecting the web services (composite or not) compatible with a given semantic description. Thus, as a simplification, we will consider that a user request for a web service is a web service semantic description (without considering possible additional constraints or user preferences). Our work focuses on how composite web services are specialized or reused, rather than on the matching process with a given semantic description. We will also assume that our system does not create new compositions from scratch, but that compositions already exist, created externally by software designers or architects. Our goal is then to reuse these existing composite web services, and possibly adapt them, by substituting some web services that they reference with other compatible ones. In order to do that, we introduce the concept of abstract composite web service, based on Melloul and Fox's web service high-level patterns [1]. 
In an abstract composite, some concrete web services are replaced by web service semantic descriptions, i.e. high-level descriptions of the functionalities fulfilled by these web services, using the above-mentioned service ontology. The structure of the initial composite service is preserved. Each abstract description describes exactly one web service (in other words, semantic descriptions may not span several web service slots in their hosting abstract composite).
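A minimal sketch of this data model, under our own naming conventions (the paper does not prescribe an implementation): a composite is an ordered list of slots, each holding either a concrete web service or a semantic description; specializing an abstract composite replaces descriptions with concrete services while preserving the structure.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical sketch: class and field names are ours, not the paper's.

@dataclass(frozen=True)
class ConcreteWS:
    name: str

@dataclass(frozen=True)
class SemanticDescription:
    functionality: str  # term from the assumed service ontology

Slot = Union[ConcreteWS, SemanticDescription]

@dataclass
class Composite:
    name: str
    slots: list  # list[Slot]; structural operators (seq, par, ...) omitted

    def is_abstract(self) -> bool:
        # A composite is abstract as soon as one slot is a description.
        return any(isinstance(s, SemanticDescription) for s in self.slots)

    def specialize(self, bindings: dict) -> "Composite":
        """Replace semantic descriptions with concrete services,
        preserving the composite's structure."""
        new_slots = [
            bindings.get(s.functionality, s)
            if isinstance(s, SemanticDescription) else s
            for s in self.slots
        ]
        return Composite(self.name + "_specialized", new_slots)
```

For instance, an abstract `TravelToLondon` with `TravelTicket` and `HotelBooking` descriptions specializes to a concrete composite once both descriptions are bound to concrete services.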

The motivations behind this concept are twofold:
- First, it allows us to reuse composites by generalization and specialization, adapting composite services to other contexts, and relaxing some context-specific constraints;
- Second, abstract composites will serve us as a link between structural information and usage information.

[Figure 1. Abstract and concrete composite web service example]

Figure 1 shows an abstract composite web service, without detailing structural information, constructed by substituting two web services in a concrete specialization, WS2 and WS3, by two abstract descriptions D2 and D3. However, care must be taken when defining such abstract compositions, as stated by Melloul and Fox in [1]: a too abstract composite may be unusable and useless, as it would contain too little concrete information, and would be very difficult to specialize to a given context. Conversely, a very specific abstract composite is obviously difficult to reuse in different situations. A compromise between the two must be found, allowing sufficient yet useful reuse, as in any component-reuse problem. Another hypothesis in our work is that concrete composites contain only fixed web service references (resolved at composition time): they may not, for example, include in their execution the process of searching for web services and calling them afterward. Finally, our problem is to find the most proper and usable composite web services, which we will call stable composite web services, with respect to a given user request and evaluation criteria such as validity, pertinence, availability, and robustness, by exploiting the information about the structure and usage of composite web services and their components. The composite web services proposed by the composer system will either be existing concrete composites, or automatic specializations of abstract composites. 
3 TRAVEL AGENCY EXAMPLE

We use the classical travel agency example, since it involves several kinds of web services and composite tasks using these web services, in an overall service: planning a journey. Let us define the following available web services, grouped by semantic categories:
- Travel ticket booking: plane1 (covers flights from and to London), plane2 (covers flights from and to Paris), plane3 (covers flights from and to Milan and Trento), and eurostar;
- Hotel booking: paradise hotel (in London and Paris), coconut hotel (in Milan and Trento);
- Car rental: car1 (Milan, Trento), car2 (all cited cities).

We assume that we already have a concrete composite web service named TravelToLondonFromParis, as shown in figure 2, which consists in requesting a reservation web service for the eurostar train from Paris to London, followed by the parallel booking of a ParadiseHotel chain hotel in London and the renting of a car with car2.

[Figure 2. UML activity diagram of TravelToLondonFromParis]

The main rounded box represents the composite with its global inputs (period) and outputs (train ticket, hotel booking and car rental). Now let us suppose that there exists an abstract composite web service named TravelToLondon, a generalization of our previous concrete composite service, where the eurostar service is replaced by the semantic description TravelTicket, with an input parameter, destination, fixed to the value London (see figure 3), and the paradise hotel by the semantic description HotelBooking.

[Figure 3. UML activity diagram of TravelToLondon]

We introduce another abstract composite, named DirectTravel, which is an abstraction of TravelToLondon and also, transitively, an abstraction of TravelToLondonFromParis; it is presented in figure 4. Here all involved web services are replaced by the abstract semantic descriptions TravelTicket, HotelBooking and CarRental, with the global inputs origin, destination, period, and the same outputs as the two previous composite web services.

[Figure 4. UML activity diagram of DirectTravel]

Finally, we consider that there exists a last abstract composite web service named IndirectTravel, which includes a reference to the former abstract composite DirectTravel, as shown in figure 5. A hotel booking may be necessary if the travel spans one night in the intermediate city, which is represented by the alternative. The sign on DirectTravel indicates that it refers to another abstract composite definition.

[Figure 5. UML activity diagram of IndirectTravel]
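The scenario's service categories can be encoded as a small registry; a hypothetical sketch (the dict layout and the tuple encoding of the workflow are our simplification, with service names taken from the example):

```python
# Services grouped by semantic category, as in the travel agency example.
SERVICES_BY_CATEGORY = {
    "TravelTicket": ["plane1", "plane2", "plane3", "eurostar"],
    "HotelBooking": ["paradise_hotel", "coconut_hotel"],
    "CarRental":    ["car1", "car2"],
}

# TravelToLondonFromParis: a sequence whose second step runs the hotel
# booking and car rental in parallel, mirroring figure 2.
TRAVEL_TO_LONDON_FROM_PARIS = (
    "seq",
    ("call", "eurostar"),                # Paris -> London train
    ("par", ("call", "paradise_hotel"),  # hotel in London
            ("call", "car2")),           # car rental
)

def candidates(description: str) -> list:
    """Return the concrete services matching a semantic description
    (a stand-in for the assumed semantic matching engine)."""
    return SERVICES_BY_CATEGORY.get(description, [])
```

Here `candidates("TravelTicket")` stands in for the matching engine assumed in section 2; the real matcher would operate on ontology descriptions rather than string categories.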

4 IMMUNE SYSTEM INSPIRED APPROACH

4.1 A simplified view of immune systems

[Figure 6. Pattern recognition of an antigen by B-cells]

One of the roles of an immune system [2] is to protect our body from attacks by invasive foreign substances. Such a foreign substance is called a pathogen, and is recognized by the immune system as an antigen. The mechanisms used by the immune system for this purpose are:
- Pattern recognition of foreign antigens, carried out by receptors on the surface of antibodies released from the immune cells (lymphocytes: B-cells and T-cells). The binding of an antigen to the different antibodies requires that portions of the two structures have complementary shapes that can closely approach each other. The area on an antigen where it has contact with an antibody is called an epitope; the corresponding area on an antibody is called a paratope. The strength of the binding between an antigen and the different antibodies depends on the affinity between them: the higher the affinity, the stronger the binding.
- The immune response, constituted by two kinds of response. A primary response is provoked when the immune system encounters an antigen for the first time. A number of antibodies are produced by the immune system in response to the infection, which helps to eliminate the antigen from the body. However, after a period of days the level of antibody begins to degrade, until the antigen is encountered again. The secondary immune response is said to be specific to the antigen that first initiated the immune response, and involves the process of affinity maturation (see below).
- Clonal selection. When the antibodies of a B-cell bind with an antigen, the B-cell becomes activated and begins to proliferate. New B-cell clones are produced that are an exact copy of the parent B-cell, but then undergo somatic hypermutation and produce antibodies that are specific to the invading antigen. 
- The affinity maturation process, which guarantees that the immune system becomes increasingly better at the task of recognising patterns. After the primary immune response, when the immune system first encounters a foreign substance and the substance has been removed, a certain quantity of B-cells remains and acts as an immunological memory. This allows the immune system to launch a faster and stronger attack against the infecting agent, called the secondary immune response. This second, faster response is attributed to memory cells remaining in the immune system, so that when the same or a similar antigen is encountered, a new immunity does not need to be built up: it is already there. This means that the body is ready to better combat any re-infection.

4.2 Immune system metaphor for our web service composition reuse problem

Although immune systems have greatly inspired our approach, our ambition is not to define an exact correspondence between the concepts, processes and mechanisms of our model and those of the immune system. We try to use this metaphor as much as possible, especially when it helps in understanding the rationale of the model; however, some specificities of our problem and solution still don't have a counterpart in immune systems, and vice versa. In our model, different composite web services are proposed to the user in order to answer a composition request, which represents the antigen aggression. In an immune system, when neutralizing antigens, the immune cells specialize in attacking this antigen or similar ones, and become memory cells through a process of affinity maturation. Thus memory cells are cells specialized for one category of antigens. We imagine, in the same manner, that an existing selection of concrete composite web services exists, specialized to answer a category of requests. 
However, as detailed in section 4.5.2, the sole semantic compatibility with the request, managed by the semantic matching mechanism, is not enough to ensure that a given concrete composite will be successful and stable. We claim that there needs to be some sort of affinity value, derived from usage feedback information. In the immune system, the process of affinity maturation [3] strongly depends on the processes of specialization and clonal selection of the immune cells, which are based on the affinity value between immune cells and antigens. The higher the affinity value, the more adequate the immune cells' answer. The affinity value is based on the degree of complementarity between immune cells and antigens. In our model, we define some usage information representing how, and how many times, web services or concrete composites are used with respect to a given context, represented in turn by a chain of ancestor abstract composites. Then, we define the process of maturation as a process of electing composite web services as stable composite web services.

Table 1. Correspondence with the immune system

  Immune System        | WS Composer System
  ---------------------|--------------------------------------------------
  Antigen              | User request
  Pattern recognition  | Semantic matching with existing WS + user choice
  Affinity             | Semantic compatibility + relative and global
                       | affinity values
  Affinity maturation  | Stable composite WS election
  Memory cells         | Stable composite WS
  B-cells              | Concrete or virtual composite WS

Table 1 summarizes these correspondences. Some of the terms used here are detailed in section 4.5.

4.3 Model and motivations

For achieving our goal, our idea is that structural information should be combined with usage information extracted from the composite web services, in order to promote and possibly publish

stable and relevant composite web services. Stable composite web services will represent potential building blocks reusable in new compositions corresponding to a category of requests, in the same manner that the memory cells in an immune system react to a certain category of antigens. In particular, the structural information comprises: the composite definition itself, i.e. the structural constructs linking the referenced internal web services together (which we will refer to as the children of this composite), organizing them in a well-defined process; plus the generalization relations between composites. The exact structural information available depends on the workflow or process description language used to describe composite web services; however, we can always extract the composition dependencies between a composite and its children. The usage information is represented by a metric in our model: the relative affinity value, associated to a relation between each child component of a composite web service and its various abstractions. In the example of section 3, we have a relative affinity relation between: eurostar and TravelToLondon, eurostar and DirectTravel; ParadiseHotel and TravelToLondon, ParadiseHotel and DirectTravel; car2 and DirectTravel. We assume that we have a local repository that contains information for a set of existing composite or simple web services. Formally, a web service, composite or simple, is defined as ws = {I, O}, where I is its set of inputs and O its set of outputs. Along with this information stored for each composite or simple web service, we add in the repository some meta-information about the structure (the generalization relationships and the composition dependency relationships) and about the usage (the valued relative affinity relationships).

[Figure 7. Composite web services meta-information model]

Figure 7 shows a UML class diagram describing our model. For clarity reasons, the relative affinity relation is not represented. Also, in theory, abstract composites may contain (use) other abstract composites (not visible here). The model notably includes:
- Concrete web service, which is both a composite and a real web service. It uses web services, its children, which may in turn be other concrete composites or elementary web services;
- Abstract composite, which is a composite but also contains semantic descriptions.

The important relations (UML associations) are:
- The composition dependency relation, here the uses association;
- The generalization relation, represented by both the instantiates and generalization directed associations.

4.4 Application to the Travel Agency example

[Figure 8. Travel agency example: usage and structure meta-information]

Figure 8 represents in a single diagram the composition dependency relations, the generalization relations, and the valued relative affinity relations. For the sake of clarity, only the first level of relative affinity relations is shown; however, these relations are by construction repeated from any abstract composite to its direct ancestor, following the generalization link, plus other ones. Thus, very abstract composites like DirectTravel will have a lot of potential children with which they have a relative affinity. It is important to note that relations of different natures are represented on this diagram, which should not be confused: generalization links, which in the opposite direction represent specialization (or instantiation when leading to a concrete composite), are not of the same nature as composition dependency links. 
Composition dependency means that a given composite uses another composite or simple web service: this is neither an instantiation nor a specialization. Relative affinity links can be seen as a potential composition in a specialized form of the abstract composite, augmented with an affinity value. With respect to section 3, we have added two new composites, which will serve us later on: TravelToLondonFromFrance and TravelToLondonFromGermany.

4.5 Process

Figure 9 represents our system as a global process. In practice, however, some parts of this process are distributed among several agents, the cells of our immune system. The distributed part includes the automatic specialization, the relative affinity update, and the affinity maturation steps. As stated in section 2, we consider that a user request is equivalent to a semantic description, and that there exists a matching mechanism selecting compatible candidates among the composites and simple web services indexed in our repository. 

After that, the composer system performs automatic specialization, which consists of specializing compatible (candidate) abstract composites, using relative affinity as guidance, into potential new concrete composites. This task does not create composites from scratch, but explores new instantiation possibilities.

[Figure 9. Global process from user request to composite execution]

Then, the system reorders all composition propositions, including the other candidate concrete composites obtained in the semantic matching step, using a global affinity value for each composite, and presents them to the user. The latter chooses one proposition: this choice in itself brings some exploration into the system, since a user is not forced to pick the best composites. Then, the composite is executed by the system, and the relative affinity values are updated. The affinity maturation will possibly elect the new concrete composites, if used, as stable (i.e. memory cells), and associate them with a semantic description, so that they become selectable by the semantic matching engine. They are then usable in subsequent automatic specializations.

4.5.1 Relative affinity

Definition 1 (Relative affinity). A value of relative affinity is always associated to a relation between a concrete web service (composite or not), in execution, and the abstract composite of the composite that calls it. This value is updated every time this concrete web service is executed as a child of one of this abstract's specializations. The relative affinity function is equal to:

    aff_r(c, a) = freq_succ / freq_usage    (1)

where freq_succ is the function measuring the number of times this concrete web service c has been used with success, and freq_usage is the total number of times this concrete web service has been used.

[Figure 10. Relative affinity update example]

Figure 10 illustrates the updating mechanism of the relative affinity values between concrete web services (WS1, WS2, WS3, WS4 and Y) and the generalization of the parent composite that calls them. The execution of composite X includes the ordered execution of WS1, WS2 and the composite Y. Once the concrete WS1 has been executed, the relative affinity relation between A_X (the abstract composite of X) and WS1 is updated, or generated if it did not exist, with an associated relative affinity value equal to the fraction between the number of successes and the total number of utilizations of WS1 as a child of any specialization of A_X. The same happens between the abstract composite A_X and the concrete WS2. 
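The bookkeeping behind equation (1) can be sketched as follows; a hypothetical implementation (class and method names are ours) keeping success and usage counters per (child service, abstract ancestor) pair:

```python
from collections import defaultdict

class RelativeAffinity:
    """Tracks aff_r(c, a) = freq_succ / freq_usage per (child, abstract)."""

    def __init__(self):
        self.success = defaultdict(int)  # (child, abstract) -> successes
        self.usage = defaultdict(int)    # (child, abstract) -> total uses

    def record(self, child: str, abstract: str, success: bool) -> None:
        """Update the relation after one execution of `child` inside a
        specialization of `abstract` (creating it on first use)."""
        self.usage[(child, abstract)] += 1
        if success:
            self.success[(child, abstract)] += 1

    def aff_r(self, child: str, abstract: str) -> float:
        """Equation (1); 0.0 for a pair that has never been used."""
        used = self.usage[(child, abstract)]
        return self.success[(child, abstract)] / used if used else 0.0
```

Two successes out of three executions of eurostar under TravelToLondon would, under this sketch, yield aff_r = 2/3.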
The relative affinity function is equal to: aff semantic matching ordering and presentation execute CWS freq User request (semantic description) choose a concrete CWS get results succ r ( c, a) (1) frequsage where freq succ is the function measuring the number of times that this concrete web service (c) has been used with success and freq usage is the total number of time that this concrete web service has been used. Figure 10 illustrates the updating mechanism of the relative affinity values between concrete web services (WS1, WS2, WS3, WS4 and Y) and the generalization of the parent composite that calls them. The execution of composite X includes the ordered execution of WS1, WS2 and the composite Y. Once the concrete WS1 has been executed, the relative affinity relation between A X (the abstract composite of X) and WS1 is updated, or generated if it did not exist, with an associated relative affinity value equal to the fraction between the number of successes and the total number of utilizations of WS1 as a child of any specialization of A X. The same happens between the abstract composite A X and the concrete WS2. WS1 A X WS2 Figure 10. Relative affinity update example Since execution of the composite Y includes the execution of WS3 and WS4, a relation between A Y and the two concrete web services WS3 and WS4 are generated or their relative affinity value updated. Then, when the execution of the composite Y terminates, a relation between the abstract composite A X and the composite Y is generated with a numeric value of relative affinity Affinity maturation During the automatic specialization step, new virtual concrete composite may be created by specializing abstract composites compatible with a user request, with different children than those present in existing concrete composites. 
As long as such a virtual composite is not considered stable by the system, it does not have a proper existence outside the current user session, and is neither identified nor associated to a semantic description in the web service repository of the composer system. The only trace of its existence is the global affinity value calculated by the system, also used to order the propositions before presenting them to the user. This global affinity value only depends on the relative affinity values of the various children, and possibly grandchildren, etc., involved in the virtual composite.

Definition 2 (Global affinity). The global affinity value represents a weighted average of the various relative affinity values, with respect to the whole ancestor chain of the concrete composite. Its value is given by the following function:

    aff_g(C) = ( Σ_{i=1}^{n} Σ_{j=1}^{m} α_ij · aff_r(c_i, a_i^j) ) / n    (2)

where C is the considered concrete composite, viewed here as a set of n children concrete web services (c_1, ..., c_i, ..., c_n); m represents the number of ancestor abstract composites of C; a_i^j are the semantic descriptions of the children concrete web services to instantiate on the j-th ancestor, corresponding to the concrete child c_i; α_ij are weight values, such that the sum of the weight values referring to the same concrete child web service is equal to one; and aff_r(c_i, a_i^j) is the relative affinity of c_i with respect to a_i^j. 
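Equation (2) can be computed with a short sketch; the data layout here is our assumption: `children` maps each concrete child to a list of (weight, relative affinity) pairs, one pair per ancestor abstract composite, with weights per child summing to one.

```python
def global_affinity(children: dict) -> float:
    """Equation (2): weighted average of relative affinities over the
    ancestor chain, divided by the number of children n."""
    n = len(children)
    if n == 0:
        return 0.0
    total = 0.0
    for pairs in children.values():
        # Per definition 2, the weights for one child must sum to one.
        weights = [w for w, _ in pairs]
        assert abs(sum(weights) - 1.0) < 1e-9, "weights per child must sum to 1"
        total += sum(w * aff for w, aff in pairs)
    return total / n
```

For three children, each with two ancestors weighted 0.5/0.5 and relative affinities (0.8, 0.6), (1.0, 0.5) and (0.4, 0.2), this yields (0.7 + 0.75 + 0.3) / 3.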

Definition 3 (Virtual composite). A virtual composite is a composite created by the composer system as a result of the automatic specialization step, in response to a user request. This composite is temporary to the session, and does not have an actual existence in the composer system's web service repository.

Definition 4 (Stable composite). A stable composite is a former virtual composite which has been used successfully at least once with a global affinity value exceeding a predefined threshold (a parameter of the composer system), which we refer to as the affinity maturation threshold.

[Figure 11. Example of global affinity value calculation]

In the example of figure 11, the global affinity value is:

    aff_g(C) = ( α11 aff_r(c1, a1^1) + α12 aff_r(c1, a1^2)
               + α21 aff_r(c2, a2^1) + α22 aff_r(c2, a2^2)
               + α31 aff_r(c3, a3^1) + α32 aff_r(c3, a3^2) ) / n

where α11 + α12 = α21 + α22 = α31 + α32 = 1.

Once a concrete composite is considered stable, it is stored and indexed in the web service repository, and associated to a semantic description. As a real concrete composite, it can be matched directly to user requests without needing the specialization step, and also becomes eligible as a child concrete web service inside other composite web services. This stable composite election process is inspired by the immune system's affinity maturation process. Affinity is stimulated by cross-usage, and different existing composites may contribute to the maturation of the same new stable concrete composite. One of its goals is to ensure some sort of long-term memory in the system, since stable composites may be kept for an arbitrarily long time, even if their global affinity value falls below the affinity maturation threshold for some period.
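The election rule of definition 4 can be sketched as a single check; a hypothetical helper (the repository is simplified to a dict, and all names are ours):

```python
def elect_stable(repository: dict, name: str, global_aff: float,
                 success: bool, threshold: float) -> bool:
    """Promote a virtual composite to stable after one successful use with
    a global affinity above the maturation threshold, registering it in
    the repository so the semantic matcher can select it directly."""
    if success and global_aff > threshold:
        repository[name] = {"global_affinity": global_aff, "stable": True}
        return True
    return False
```

A real implementation would also attach the specialized semantic description mentioned above, so that the new stable composite becomes selectable by the matching engine.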
4.5.3 Example

To illustrate specialization and affinity maturation, let us suppose that a user wishes to travel to London from France, in the context of the scenario presented in section 3 and augmented in section 4.4, figure 8. We assume that the user request is compatible with TravelToLondonFromFrance. However, we also assume that TravelToLondonFromParis and TravelToLondonFromGermany have been used successfully many times, so that the relative affinity of ParadiseHotel with respect to the abstract composite TravelToLondon is higher than that of CoconutHotel. Although the context is slightly different, ParadiseHotel is still applicable to the semantic description HotelBooking included in TravelToLondon. Thus, the composer system will propose to the user, with a higher rank, a virtual composite with ParadiseHotel instead of CoconutHotel. Now, let us suppose that the user chooses this virtual composite web service, and that the execution is successful. If the global affinity is greater than the affinity maturation threshold, a new stable concrete composite is created and referenced in the repository, and can now be used directly.

4.6 Exploration / exploitation ratio

Apart from the obvious exploration due to the fact that the user chooses among different composite propositions from the system, there is another interesting form of exploration in this process, related to the automatic specialization step. During this phase, the composer system tries to instantiate new concrete composites from existing abstract ones, by substituting some children web services. This exploration is guided by two factors: (i) the existence of abstract composites and (ii) the relative affinity values of potential children web services, with respect to these abstract composites. Abstract composites are supposed to be designed by system administrators or programmers, and influence the way the system will react. 
However, the creation and enforcement of new composites, for a given user request, is also highly influenced by the cross-usage of various composites related to these abstract composites, and involving other potential children web services in different contexts. This influence is not limited to the specific usage of the concrete candidate composites directly compatible with the user request. As a consequence, the ratio between exploration and (usage) exploitation is mainly determined by the density and structure of the generalization and composition graphs formed by abstract composites, by the initial distribution of the concrete composites and the elementary web services in the system, and above all by the variability of the requests and choices of users.

4.7 Discussion

The semantic level and matching mechanism are not covered in this paper. However, the language and ontology used for the semantic descriptions are indeed very important for our approach to work properly: the language used should allow designers to define abstract descriptions, for abstract composites, that remain compatible with more specific descriptions. Additionally, it should allow our system to specialize an abstract description, when an automatic specialization occurs. This specialized description will probably add some constraints related to the parameters fixed in the concrete composite, and also related to the newly associated children web services. Another issue is the relative importance of the children of a given composite. We currently consider them evenly, with the same weight, in the calculation of the global affinity value. Assuming that if any of them fails, the whole composite fails (note that this is not always the case, especially if the composite includes some form of redundancy to increase its robustness), one might consider that the weakest affinity score should be used instead. 
Alternatively, a more complex affinity value calculation, partly based on the workflow structure of the composite, could be investigated.
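The two aggregation policies discussed above can be contrasted in a few lines; a hypothetical sketch (function names and the flat list of child affinities are our simplification):

```python
def mean_affinity(affinities: list) -> float:
    """Current model: all children weighted evenly."""
    return sum(affinities) / len(affinities) if affinities else 0.0

def weakest_link_affinity(affinities: list) -> float:
    """Alternative considered in the discussion: the composite is only as
    reliable as its weakest mandatory child, which is appropriate when
    any child failure fails the whole composite."""
    return min(affinities) if affinities else 0.0
```

For child affinities [0.9, 0.9, 0.2], the mean (about 0.67) hides the fragile child that the weakest-link score (0.2) exposes, which illustrates why the choice of aggregation matters for composites without redundancy.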

The affinity functions used in our approach do not define any absolute confidence value associated to each web service or composite web service, independently of any context. Even the global affinity function is always relative to a context, represented by the chain of ancestor abstract composites and the user request. This design choice might be considered somewhat restrictive, and one may consider that the relative affinity values should be combined with an absolute confidence value for each web service, so that usage feedback may be shared more widely across usage contexts. Our motivation is that the success of a web service is often very context-sensitive: a single parameter change can condition the success or failure of a request.

5 RELATED WORKS

This approach is inspired by the work of Melloul and Fox in [1]. In particular, the abstract composite concept in our model is close to their high-level composition patterns. Our contribution mainly adds the automatic specialization process, the affinity relations, and the affinity maturation process. Although we do not use class inheritance explicitly, our generalization/specialization relations suggest an object-oriented inheritance model. In this direction, different works can be found in the area of workflow class inheritance. For example, in [4] Bussler presents a framework to analyze the requirements for supporting such workflow class inheritance. Different perspectives of inheritance are discussed and a workflow class definition language is proposed. In [5], Kappel and Lang present a workflow class specification, consisting of a set of object classes and rules. Subclasses and inheritance are supported, at least partially. In [6], Papazoglou and Yang describe a system called TOWE that implements a set of classes providing the basic mechanisms for workflow execution. Other workflow classes can then be developed by inheriting the functionality of basic workflow classes. 
Our generalization relation, however, is defined by the substitution, in the composite definition, of one or more concrete child web services by semantic descriptions. This relation is derived from the composites' structure: it is not a purely additional, higher-level classification of existing web services, which would rather correspond to the semantic description level in our case. Our approach differs from other related works on web service composition in that it focuses on reusing existing abstract composites that can be specialized into particular concrete composites, or reused in the construction of larger or extended composites. Finally, we distinguish our work from automatic web service composition, such as the work by McIlraith et al. [7] on semantic web service composition and the work by Petrie et al. [8] on web service planning, where the goal is to produce a composition plan. Rather, we start with existing (high-level) plans and focus on their different possible reuses, exploiting the combination of cross-usage, the affinity relations, and structural meta-information, i.e. the composition and generalization relations.

6 CONCLUSION AND FUTURE WORKS

In this paper we have proposed an approach to composite web service reuse and automatic specialization by substitution of child components, inspired by the human immune system. The shape correspondence between antigen epitopes and antibody paratopes is represented in our system by a relative affinity function, measuring the degree of success of the use of a concrete child web service, within the context of a concrete composite, with respect to a more general abstract composite web service. The associated affinity maturation process allows the emergence of new stable concrete composites, resulting from automatic specializations and the accumulated cross-usage information.
These stable concrete composites are then identified and semantically described, like any existing web service in our system. This process, as well as the relative affinity calculation, is of course guided by the definition of meaningful abstract composites, which gives the composer system's administrators a degree of control over the affinity propagation and over the potential automatic specializations proposed by the system. We are currently developing a prototype in order to validate the feasibility of this approach on simple scenarios (work in progress). One perspective is to extend the model to better account for the internal structure of web service composites in the relative affinity function (for example, differentiating redundant and mandatory children). A second perspective is to consider the specificity of the request in our global affinity evaluation: if a request is very specific, it is reasonable to think that the relative affinity values related to the most specific abstract composites matter more than those related to the most abstract ones. On the contrary, a very vague request will depend less on the former and more on the latter. A long-term perspective is to leverage the distributed nature of the immune system model, and its natural tolerance to heterogeneity. Instead of having one global composer system, this approach can scale to a network of interconnected web service composition domains managed by local composer systems. Such composer systems may publish stable or abstract composites to each other, with respect to some diffusion policy. This diffusion would correspond to the spreading and cloning of memory cells in our blood.

REFERENCES

[1] L. Melloul and A. Fox, Reusable Functional Composition Patterns for Web Services, in Proceedings of the IEEE International Conference on Web Services (ICWS), San Diego, CA, USA, (2004).
[2] L. N. de Castro and F. J. Von Zuben, Artificial Immune Systems: Part I, Basic Theory and Applications, RT DCA Technical Report, (1999).
[3] C. Berek and M. Ziegner, The Maturation of the Immune Response, Immunology Today, 14(8), , (1993).
[4] C. Bussler, Workflow Class Inheritance and Dynamic Workflow Class Binding, in Proceedings of the Workshop on Software Architectures for Business Process Management at the 11th Conference on Advanced Information Systems Engineering, Heidelberg, Germany,
[5] G. Kappel, P. Lang, S. Rausch-Schott, and W. Retschitzegger, Workflow Management Based on Objects, Rules and Roles, IEEE Data Engineering Bulletin, 18(1), 11-18, (1995).
[6] M. P. Papazoglou and J. Yang, Design Methodology for Web Services and Business Processes, in Proceedings of the 3rd VLDB-TES Workshop, Hong Kong; also in LNCS, 2444, Springer, (2002).
[7] S. A. McIlraith, T. C. Son, and H. Zeng, Semantic Web Services, IEEE Intelligent Systems, Special Issue on the Semantic Web, 16(2), 46-53, (2001).
[8] C. Petrie, M. Genesereth, H. Bjornsson, R. Chirkova, M. Ekstrom, H. Gomi, T. Hinrichs, R. Hoskins, M. Kassoff, D. Kato, K. Kawazoe, J. U. Min, and W. Mohsin, Adding AI to Web Services, Agent Mediated Knowledge Management, LNAI, 2926, Springer, , (2004).

Designing Security Requirements Models through Planning

Volha Bryl, Fabio Massacci, John Mylopoulos and Nicola Zannone ^1

Abstract. The quest for designing secure and trusted software has led to refined Software Engineering methodologies that rely on tools to support the design process. Automated reasoning mechanisms for requirements and software verification are by now a well-accepted part of the design process, and model-driven architectures support the automation of the refinement process. We claim that we can push the envelope further, towards the automatic exploration and selection among design alternatives, and show that this is concretely possible for Secure Tropos, a requirements engineering methodology that addresses security and trust concerns. In Secure Tropos, a design consists of a network of actors (agents, positions or roles) with delegation/permission dependencies among them. Accordingly, the generation of design alternatives can be accomplished by a planner which is given as input a set of actors and goals and generates alternative multi-agent plans to fulfill all given goals. We validate our claim with a case study using a state-of-the-art planner.

1 Introduction

The design of secure and trusted software that meets stakeholder needs is an increasingly hot issue in Software Engineering (SE). This quest has led to refined Requirements Engineering (RE) and SE methodologies so that security concerns can be addressed during the early stages of software development (e.g. Secure Tropos vs i*/Tropos, UMLsec vs UML, etc.). Moreover, industrial software production processes have been tightened to reduce the number of bugs in operational software systems through code walkthroughs, security reviews, etc. Further, the complexity of present software is such that all methodologies come with tools for automation support. The tricky question in such a setting is: what kind of automation?
Almost fifty years ago, the idea of deriving code directly from its specification (such as that advocated in [22]) started a large programme of deductive program synthesis ^2 that is still active now [5, 11, 25, 29]. However, proposed solutions are largely domain-specific, require considerable expertise on the part of their users, and in some cases do not actually guarantee that the synthesized program will meet all requirements stated up front [11]. Another approach is to facilitate the work of the designer by supporting tedious aspects of software development, automating the design refinement process. This approach underlies Model Driven Architectures (MDA) [27], which focuses on the (possibly automatic) transformation from one system model to another. Tools supporting MDA exist and are used in the Rational Unified Process for software development in UML. Yet, the state of the art is still not satisfactory [30]. Such approaches only cover part of the work of the designer. We advocate that there is another activity where the support of automation could be most beneficial [20]: exploring alternative options is at the heart of the requirements and design processes. Indeed, in most SE methodologies the designer has tools to report and verify the final choices (be it goal models in KAOS, UML classes, or Java code), but not the possibility of automatically exploring design alternatives (i.e. the potential choices that the designer may adopt for the fulfillment of system actors' objectives) and finding a satisfactory one.

^1 University of Trento, Italy,
^2 A system goal together with a set of axioms are specified in a formal specification language. Then the system goal is proved from the axioms using a theorem prover. A program for achieving the goal is extracted from the proof of the theorem.
Conceptually, this automatic selection of alternatives is done in deductive program synthesis: theorem provers select appropriate axioms to establish the system goal. Instead, we claim that the automatic selection of alternatives should, and indeed can, be done during the very early stages of software development. After all, the automatic generation of alternatives is most beneficial and effective during these stages. There are good reasons for this claim. Firstly, during early stages the design space is large, and a good choice can have a significant impact on the whole development project. Supporting the selection of alternatives could lead to a more thorough analysis of better-quality designs with respect to security and trust. Secondly, requirements models are by construction simpler and more abstract than implementation models (i.e. code). Therefore, techniques for automated reasoning about alternatives at the early stages of the development process may succeed where automated software synthesis failed. Since our overall goal is to design a secure system, we have singled out the Secure Tropos methodology [16] as the target for our work. Its primitive concepts include those of Tropos and i* [7], but also concepts that address security concerns, such as ownership, permission and trust. Further, the framework already supports the designer with automated reasoning tools for the verification of requirements as follows:

1. Graphical capture of the requirements for the organization and the system-to-be;
2. Formal verification of the functional and security requirements by completing the model drawn by the designer with axioms (a process hidden from the designer), and checking the model for the satisfaction of formal properties corresponding to specific security or design patterns.

In this framework (as in many other similar RE and SE frameworks) the selection among the alternatives is left to the designer. We will show that we can do better. Indeed, in Tropos (resp. Secure Tropos) requirements are conceived as networks of functional dependencies (resp. delegations of execution) among actors (organizational/human/software agents, positions and roles) for goals, tasks and resources. Every dependency (resp. delegation of execution) involves two actors, where one actor depends on the other for the delivery of a resource, the fulfillment of a goal, or the execution of a task. Intuitively, these can be seen as actions that the designer has ascribed to the members of the organization and the system-to-be. As suggested by Gans et al. [14], the task of designing such networks can then be framed as a planning problem for multi-agent systems: selecting a suitable design corresponds to selecting a plan that satisfies the prescribed or described goals of human or system actors. Secure Tropos adds to the picture the notion of delegation of permission and various notions of trust. In this paper we show that it is possible to use an off-the-shelf planner to select, among the potential dependencies, the actual ones that will constitute the final choice of the requirements engineer. If a planner is already able to deliver good results, then this looks like a promising avenue for transferring the technique to complex industry-level case studies, where a customized automated reasoning tool might be very handy. At the same time, if the problem is not trivial, not all planners will be able to deliver, and indeed this turned out to be the case. The techniques we use are sufficiently powerful to cope with security requirements as well as functional requirements, but we concentrate here on their applicability to a security setting, where automated support for the selection among potentially conflicting alternatives is more urgent.
The application of the same planning techniques to the overall software development process can be found in [3]. In this work we have not focused on optimal designs: after all, human designers do not aim for optimality in their designs. As noted by Herbert Simon in his lecture on the Science of Design [31], what makes humans effective (in comparison to machines) is their ability to identify a satisficing design as opposed to an optimal one. Of course, we assume that the designer remains in the loop: designs generated by the planner are suggestions to be refined, amended and approved by the designer. The planner is a(nother) support tool intended to facilitate the design process. The rest of the paper is structured as follows. Section 2 explains Secure Tropos concepts and describes the requirements verification process. In Sections 3, 4 and 5 the planning approach to system design is introduced and explained, while in Section 6 the implementation of our approach is presented. Finally, in Sections 7 and 8 a brief overview of related work is presented and conclusions are drawn.

2 Secure Tropos

Secure Tropos [16] is an RE methodology for modeling and analyzing functional and security requirements, extending the Tropos methodology [7]. This methodology is tailored to describe both the system-to-be and its organizational environment, starting with the early phases of the system development process. The main advantage of this approach is that one can capture not only the what or the how, but also the why a security mechanism should be included in the system design. In particular, Secure Tropos deals with business-level (as opposed to low-level) security requirements. The focus of such requirements includes, but is not limited to, how to build trust among different partners in a virtual organization and how to manage trust. Although their name does not mention security, such requirements are generally regarded as part of the overall security framework.
Secure Tropos uses the concepts of actor, goal, task, resource and social relations for defining the entitlements, capabilities and responsibilities of actors. An actor is an intentional entity that performs actions to achieve goals. A goal represents an objective of an actor. A task specifies a particular sequence of actions that should be executed to satisfy a goal. A resource represents a physical or an informational entity. Actors' desires, entitlements, capabilities and responsibilities are defined through social relations. In particular, Secure Tropos supports requesting, ownership, provisioning, trust, and delegation. Requesting identifies the desires of actors. Ownership identifies the legitimate owner of a goal, a task or a resource, who has full authority on access and disposition of his possessions. Provisioning identifies actors who have the capabilities to achieve a goal, execute a task or deliver a resource. We demonstrate the use of these concepts through the design of a Medical IS (Information System) for the payment of medical care. ^3

Example 1. The Health Care Authority (HCA) is the owner of the goal provide medical care; that is, it is the only one that can decide who can provide it and through what process. On the other hand, the Patient wants this goal fulfilled. This goal can be AND-decomposed into two subgoals: provisioning of medical care and payment for medical care. The Healthcare Provider has the capability for the provisioning of medical care, but it should wait for authorization from the HCA before providing it.

Delegation of execution is used to model situations where an actor (the delegator) delegates the responsibility to achieve a goal, execute a task, or deliver a resource to another actor (the delegatee), since he does not have the capability to provide one of the above by himself. It corresponds to the actual choice of the design.
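For illustration only, the relations of Example 1 can be jotted down as plain facts; this encoding is our own shorthand, not the formal language used by the Secure Tropos tools.

```python
# Secure Tropos relations of Example 1 as simple tuples (illustrative only).
owns = {("HCA", "provide medical care")}
requests = {("Patient", "provide medical care")}
provides = {("Healthcare Provider", "provisioning of medical care")}

# AND-decomposition: a goal is fulfilled only if all its subgoals are.
and_decomposition = {
    "provide medical care": [
        "provisioning of medical care",
        "payment for medical care",
    ],
}

def subgoals(goal):
    """Return the subgoals a goal AND-decomposes into (itself if atomic)."""
    return and_decomposition.get(goal, [goal])

print(subgoals("provide medical care"))
# ['provisioning of medical care', 'payment for medical care']
```

Such a fact base makes it easy to see why the Patient cannot fulfill the goal alone: no provides or owns fact relates the Patient to either subgoal.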
Trust of execution represents the belief of an actor (the trustor) that another actor (the trustee) has the capabilities to achieve a goal, execute a task or deliver a resource. Essentially, delegation is an action due to a decision, whereas trust is a mental state driving that decision. The Tropos dependency can be defined in terms of trust and delegation [17]. Thus, a Tropos model can be seen as a particular Secure Tropos model. In order to model both functional and security requirements, Secure Tropos also introduces relations involving permission. Delegation of permission is used when in the domain of analysis there is a formal passage of authority (e.g. a signed piece of paper, a digital credential, etc.). Essentially, this relation is used to model scenarios where an actor authorizes another actor to achieve a goal, execute a task, or deliver a resource. It corresponds to the actual choice of the design. Trust of permission represents the belief of an actor that another actor will not misuse the goal, task or resource.

Example 2. The HCA must choose between different providers for the welfare management of executives of a public institution. Indeed, since such executives have a special private-law contract, they can qualify for both the INPDAP and INPDAI ^4 welfare schemes. The INPDAP scheme requires that the Patient partially pays for medical care (with a ticket) and the main cost is directly covered by the HCA. On the contrary, the INPDAI scheme requires that the Patient pays

^3 An extended description of the example is provided in [4].
^4 INPDAP (Istituto Nazionale di Previdenza per i Dipendenti dell'Amministrazione Pubblica) and INPDAI (Istituto Nazionale di Previdenza per i Dirigenti di Aziende Industriali) are two Italian national welfare institutes.

in advance the full cost of medical care and then gets the reimbursement. Once an institution has decided on the payment scheme, this will be part of the requirements to be passed on to the next stages of system development. Obviously, the choice of the alternative may have significant impacts on other parts of the design.

Figure 1. Secure Tropos model

Figure 1 summarizes Examples 1 and 2 in terms of a Secure Tropos model. In this diagram, actors are represented as circles and goals as ovals. Labels O, P and R are used for representing ownership, provisioning and requesting relations, respectively. Finally, we represent trust of permission and trust of execution relationships as edges labelled Tp and Te, respectively. Once the modeling phase is concluded, Secure Tropos provides mechanisms for the verification of the model [16]. This means that the design process iterates over the following steps: model the system; translate the model into a set of clauses (this is done automatically); verify whether appropriate design or security patterns are satisfied by the model. Through this process, we can verify the compliance of the model with desirable properties. For example, it can be checked whether the delegator trusts that the delegatee will achieve a goal, execute a task or deliver a resource (trust of execution), or will use a goal, task or resource correctly (trust of permission). Other desirable properties involve verifying whether an actor who requires a service is confident that it will be delivered. Furthermore, an owner may wish to delegate permissions to an actor only if the latter actually needs the permission. For example, we want to avoid the possibility of having alternate paths of permission delegations. Secure Tropos provides support for identifying all these situations.
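A property check of this kind (every delegation of execution backed by a matching trust-of-execution edge) can be prototyped as a simple set difference over relation triples. The encoding and the sample triples below are our own illustration, not the clause translation used by the actual tool.

```python
# Relations as sets of (source_actor, target_actor, goal) triples
# (made-up sample data; the second delegation is deliberately unbacked).
trust_exe = {("Patient", "HCA", "provide medical care")}
delegate_exe = {
    ("Patient", "HCA", "provide medical care"),
    ("HCA", "INPDAI", "payment for medical care"),
}

def untrusted_delegations(delegations, trust):
    """Delegations of execution lacking a matching trust-of-execution edge."""
    return sorted(delegations - trust)

print(untrusted_delegations(delegate_exe, trust_exe))
# [('HCA', 'INPDAI', 'payment for medical care')]
```

The real verification step works on a clause-level translation of the whole model, but the shape of the check — compare two relations and report unmatched tuples — is the same.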
Secure Tropos has been used for modeling and analyzing real and comprehensive case studies, where we have identified vulnerabilities affecting the organizational structure of a bank and its IT system [24], and verified the compliance of the University of Trento with the Italian legislation on Privacy and Data Protection [23].

3 Design as Planning

So far the automated reasoning capabilities of Secure Tropos are only able to check that subtle errors are not overlooked. This is rather unsatisfactory from the point of view of the designer. Whereas he may have a good understanding of the possible alternatives, he may not be sure which is the most appropriate alternative for the case at hand. This is particularly true for delegations of permission that need to comply with complex privacy regulations (see [23]).

Example 3. Figures 2(a) and 2(c) present fragments of Figure 1 that point out the potential choices of the design. The requirements engineer has identified trust relations between the HCA and INPDAP and INPDAI. However, when passing the requirements on to the next stage, only one alternative has to be selected, because that will be the system that is chosen. Figures 2(b) and 2(d) present the actual choices corresponding to the potential choices presented in Figures 2(a) and 2(c), respectively.

Here, we want to support the requirements engineer in the selection of the best alternative by changing the design process as follows:

Requirements analysis phase. System actors, along with their desires, capabilities and entitlements, and possible ways of goal decomposition, are identified. Trust relationships among actors, both in terms of execution and permission, are defined.

Design phase. The space of design alternatives is automatically explored to identify delegations of execution/permission. Depending on the time/importance of the goal, the designer may settle for satisficing solutions [31] or ask for an optimal solution.
To support the designer in the process of selecting the best alternative, we advocate a planning approach, which has recently proved to be applicable in the field of automatic Web service composition [6].

Figure 2. Design Alternatives: (a) Potential choices; (b) Actual choice; (c) Potential choices; (d) Actual choice.

The basic idea behind the planning approach is to automatically determine the course of actions (i.e. a plan) needed to achieve a certain goal, where an action is a transition rule from one state of the system to another [34, 28]. Actions are described in terms of preconditions and effects: if the precondition is true in the current state of the system, then the action can be performed. As a consequence of the action, the system will be in a new state where the effect of the action is true. Thus, once we have described the initial state of the system, the goal that should be achieved (i.e. the desired final state of the system), and the set of possible actions that actors can perform, the solution of the planning problem is the (not necessarily optimal) sequence of actions that allows the system to reach the desired state from the initial state. In order to cast the design process as a planning problem, we need to address the following question: which are the actions in a software design? When drawing the Secure Tropos model, the designer assigns the execution of goals from one actor to another, delegates permission and, last but not least, identifies appropriate goal refinements among selected alternatives. These are the actions to be used by the planner in order to fulfill all initial actor goals.

4 Planning Domain

The planning approach requires a specification language to represent the planning domain and the states of the system. Different types of logics could be applied for this purpose; e.g. first-order logic is often used to describe the planning domain, with conjunctions of literals ^5 specifying the states of the system. We find this representation particularly useful for modeling real case studies. Indeed, when considering security requirements at the enterprise level, one must be able to reason both at the class level (e.g. the CEO, the CERT team member, the employee of the HR department) and at the instance level (e.g. John Doe and Mark Doe playing those roles). The planning domain language should provide support for specifying: the initial state of the system, the goal of the planning problem, the actions that can be performed, and the axioms of the background theory. Table 1 presents the predicates used to describe the initial state of the system in terms of actor and goal properties, and social relations among actors. We use:

- AND/OR decomposition to describe the possible decomposition of a goal;
- provides, requests and owns to indicate that an actor has the capabilities to achieve a goal, desires the achievement of a goal, and is the legitimate owner of a goal, respectively;
- trustexe and trustper to represent trust of execution and trust of permission relations, respectively.

The desired state of the system (or goal of the planning problem) is described through the conjunction of predicates done derived from the requesting relations in the initial state. Essentially, for each request(a,g) we need to derive done(g). By contrast, an action represents an activity to accomplish a goal. We list the actions in Table 2 and define them in terms of preconditions and effects as follows.

Satisfy. The satisfaction of goals is an essential action. Following the definition of goal satisfaction given in [16], we say that an actor satisfies a goal only if the actor wants and is able to achieve the goal, and, last but not least, is entitled to achieve it. The effect of this action is the fulfillment of the goal.

DelegateExecution. An actor may not have enough capabilities to achieve assigned goals by himself, and so he has to delegate their execution to other actors. We represent this passage of responsibilities through the action DelegateExecution. It is performed only if the delegator requires the fulfillment of the goal and trusts that the delegatee will achieve it.
Its effect is that the delegator does not worry any more about the fulfillment of this goal after delegating it, since he has delegated its execution to a trusted actor. Furthermore, the delegatee takes on the responsibility for the fulfillment of the goal, and so it becomes his own desire. Notice that we do not care how the delegatee satisfies the goal (e.g. by his own capabilities or by further delegation); it is up to the delegatee to decide.

DelegatePermission. In the initial state of the system, only the owner of a goal is entitled to achieve it. However, this does not mean that he wants it or has the capabilities to achieve it. On the contrary, in the system there may be some actors that want that goal and others that can achieve it. Thus, the owner could decide to authorize trusted actors to achieve the goal. The formal passage of authority takes place when the owner issues a certificate

^5 Let p be a predicate symbol with arity n, and t_1, ..., t_n be its corresponding arguments. p(t_1, ..., t_n) is called an atom. The expression literal denotes an atom or its negation.
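The Satisfy and DelegateExecution actions described above can be approximated as precondition/effect rules over a set of facts. This STRIPS-style Python sketch is our own approximation, not the concrete planner input language used in the paper; the fact names mirror the predicates of Table 1.

```python
# State: a set of facts, each a tuple like ("requests", actor, goal).
def satisfy(state, actor, goal):
    """Actor satisfies a goal it requests, can provide, and is entitled to."""
    pre = {("requests", actor, goal), ("provides", actor, goal),
           ("entitled", actor, goal)}
    if pre <= state:
        return (state - {("requests", actor, goal)}) | {("done", goal)}
    return None  # preconditions not met

def delegate_execution(state, delegator, delegatee, goal):
    """Delegator hands the goal to a trusted delegatee, who then requests it."""
    pre = {("requests", delegator, goal),
           ("trustexe", delegator, delegatee, goal)}
    if pre <= state:
        return (state - {("requests", delegator, goal)}) | \
               {("requests", delegatee, goal)}
    return None

state = {("requests", "Patient", "medical care"),
         ("trustexe", "Patient", "HCA", "medical care"),
         ("provides", "HCA", "medical care"),
         ("entitled", "HCA", "medical care")}
state = delegate_execution(state, "Patient", "HCA", "medical care")
state = satisfy(state, "HCA", "medical care")
print(("done", "medical care") in state)  # True
```

A planner then searches for a sequence of such action applications that derives done(g) for every initial request(a,g).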