TDDD10 AI Programming: Agents and Agent Architectures. Cyrille Berger

Lectures: TDDD10 AI Programming. Agents and Agent Architectures. Cyrille Berger
1. AI Programming: Introduction
2. Introduction to RoboRescue
3. Agents and Agent Architectures
4. Communication
5. Multiagent Decision Making
6. Cooperation And Coordination
7. Cooperation And Coordination
8. Machine Learning
9. Knowledge Representation
10. Putting It All Together

Lecture goals
Acquire knowledge of what an agent is, of the different agent architectures, and of how they make decisions.

Lecture content
- Agents
- An Overview of Decision Making
- Agent Architectures
  - Deliberative Architecture
  - Reactive Architecture
  - State Machines
  - Hybrid Architecture
- Summary

Agents

What is an agent?
An agent is a (computer) system that is situated in some environment and that is capable of autonomous action in this environment in order to meet its delegated objectives.
Agents are autonomous: capable of acting independently and of exhibiting control over their internal state.
Open questions: Should an agent be able to learn? Should an agent be intelligent?
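As a minimal illustration of this definition (not part of the original slides), the following Python sketch shows an agent as something that maps percepts from its environment to actions in pursuit of a delegated objective. The Environment class and the trivial policy are invented for the example:

    # A minimal sketch of the agent/environment coupling described above.
    # The Environment and the trivial policy are hypothetical illustrations.
    class Environment:
        """A one-dimensional world; the agent's delegated objective is position 10."""
        def __init__(self):
            self.position = 0

        def percept(self):
            return self.position

        def execute(self, action):
            self.position += {"left": -1, "right": +1}[action]

    class Agent:
        """Autonomous: it alone decides which action to take, based on its percepts."""
        def act(self, percept):
            # Goal-directed choice: move right until the objective (position 10) is met.
            return "right" if percept < 10 else "left"

    env = Environment()
    agent = Agent()
    for _ in range(20):
        env.execute(agent.act(env.percept()))
    print(env.position)  # the agent has driven the environment toward its objective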

Intelligent Agent Properties

Reactivity
Intelligent agents are able to perceive their environment, and respond in a timely fashion to changes that occur in it, in order to meet their delegated objectives.

Proactivity
Intelligent agents are able to exhibit goal-directed behaviour by taking the initiative, in order to meet their delegated objectives.

Social Ability
Intelligent agents are capable of interacting (cooperating, coordinating and negotiating) with other agents (and possibly humans), in order to meet their delegated objectives.
- Cooperation is working together as a team to achieve a shared goal. It is often prompted either by the fact that no one agent can achieve the goal alone, or by the fact that cooperation will obtain a better result (e.g., get the result faster).
- Coordination is managing the interdependencies between activities.
- Negotiation is the ability to reach agreements on matters of common interest. It typically involves offer and counter-offer, with compromises made by participants.

Agents as Intentional Systems
The philosopher Daniel Dennett coined the term intentional system to describe entities whose behaviour can be predicted by the method of attributing beliefs, desires and rational acumen.
Is it legitimate or useful to attribute beliefs, desires, and so on, to computer systems? With very complex systems, a mechanistic explanation of their behaviour may not be practical or available. But the more we know about a system, the less we need to rely on animistic, intentional explanations of its behaviour.
As computer systems become ever more complex, we need more powerful abstractions and metaphors to explain their operation; low-level explanations become impractical. The intentional stance is such an abstraction, which provides us with a convenient and familiar way of describing, explaining, and predicting the behaviour of complex systems.

Object-Oriented Programming vs Multi-Agent Systems
Object-Oriented Programming:
- Objects are passive, i.e. an object has no control over method invocation
- Objects are designed for a common goal
- Objects are typically integrated into a single thread
Multi-Agent Systems:
- Agents are autonomous, i.e. pro-active
- Agents can have diverging goals, e.g. coming from different organizations
- Agents have their own thread of control
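The "own thread of control" distinction can be made concrete. In the hypothetical Python sketch below (mine, not from the slides), a passive object only changes state when someone invokes its method, while the agent keeps acting on its own initiative:

    # Passive object vs. autonomous agent (illustrative sketch).
    import threading, time

    class Counter:
        """A passive object: it only acts when someone invokes a method."""
        def __init__(self):
            self.value = 0
        def increment(self):
            self.value += 1

    class CounterAgent(threading.Thread):
        """An agent: its own thread of control, acting on its own initiative."""
        def __init__(self):
            super().__init__(daemon=True)
            self.value = 0
        def run(self):
            while True:          # pro-active: no one has to call us
                self.value += 1
                time.sleep(0.1)

    obj = Counter(); obj.increment()       # the object acts only when invoked
    agent = CounterAgent(); agent.start()  # the agent acts by itself
    time.sleep(0.35)
    print(obj.value, agent.value)          # 1 vs. roughly 3-4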

Agent-Oriented Programming
Structural unit at each level, and its relation to the previous level:
- Machine Language: the program.
- Structured Programming: the subroutine (a bound unit of program).
- Object-Oriented Programming: the object (subroutine + persistent local state).
- Agent-Oriented Programming: the agent (object + independent thread of control + initiative).

Agent-Oriented Programming (Yoav Shoham)
Based on the agent definition: an agent is an entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments.
- The mental constructs will appear in the programming language itself.
- The semantics will be related to the semantics of the mental constructs.
- A computation will consist of agents performing speech-acts on each other.

AGENT0 (1/2)
AGENT0 is implemented in LISP. Each agent in AGENT0 has four components:
- a set of capabilities (things the agent can do)
- a set of initial beliefs
- a set of initial commitments (things the agent will do)
- a set of commitment rules
The key component, which determines how the agent acts, is the commitment rule set.

AGENT0 (2/2)
Each commitment rule contains:
- a message condition
- a mental condition
- an action
On each agent cycle:
- the message condition is matched against the messages the agent has received;
- the mental condition is matched against the beliefs of the agent;
- if the rule fires, then the agent becomes committed to the action (the action gets added to the agent's commitment set).
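The cycle just described is easy to render concretely. Below is a minimal, hypothetical Python sketch of an AGENT0-style commitment rule being matched on one agent cycle; the rule encoding and the helper names are inventions for illustration, not Shoham's actual syntax (an example in the real syntax follows on the next slide):

    # Hypothetical sketch of one AGENT0-style agent cycle (not Shoham's syntax).
    def agent_cycle(agent, inbox):
        for rule in agent["commitment_rules"]:
            for msg in inbox:
                # 1. Match the message condition against received messages.
                if rule["msg_condition"](msg):
                    # 2. Match the mental condition against the agent's beliefs.
                    if rule["mental_condition"](agent["beliefs"]):
                        # 3. The rule fires: commit to the action.
                        agent["commitments"].append(rule["action"](msg))

    # Example rule: commit to a requested action if the sender is believed a friend.
    agent = {
        "beliefs": {"friends": {"agent1"}},
        "commitments": [],
        "commitment_rules": [{
            "msg_condition": lambda m: m["type"] == "REQUEST",
            "mental_condition": lambda b: "agent1" in b["friends"],
            "action": lambda m: m["content"],
        }],
    }
    agent_cycle(agent, [{"type": "REQUEST", "sender": "agent1",
                         "content": "DO(10:00, pickup)"}])
    print(agent["commitments"])  # ['DO(10:00, pickup)']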

Example AGENT0 Rule
One rule could be: if I receive a message from agent which requests me to do action at time, and I believe that
- agent is currently a friend,
- I can do the action,
- at time, I am not committed to doing any other action,
then commit to doing action at time. In AGENT0 syntax:

    COMMIT(
      ( agent, REQUEST, DO(time, action) ),   ;;; msg condition
      ( B,
        [now, Friend agent] AND
        CAN(self, action) AND
        NOT [time, CMT(self, anyaction)]
      ),                                      ;;; mental condition
      self,
      DO(time, action)
    )

An Overview of Decision Making

Individual decision making
- Explicit decision making: decision trees, rules, automata, single-agent task specification languages
- Decision theoretic decision making: Markov Decision Processes (MDP), Partially Observable Markov Decision Processes (POMDP)
- Declarative (logic-based) decision making: theorem proving, planning, constraint satisfaction

Multiagent decision making
- Explicit: mutual modeling, norms, organizations and roles, multiagent task specification languages
- Decision theoretic: Decentralized POMDPs (Dec-POMDP)
- Game theoretic: auctions
- Declarative: multiagent planning, distributed constraint satisfaction

Agent Architectures
"[A] particular methodology for building [agents]. It specifies how the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact." (P. Maes, 1991)
Three types: deliberative (symbolic/logical), reactive, and hybrid.

Deliberative Architecture

We define a deliberative (or reasoning) agent or agent architecture to be one that:
- contains an explicitly represented, symbolic model of the world, and
- makes decisions via symbolic reasoning.
This views agents as knowledge-based systems. We can say that a deliberative agent acts in three steps: Sense, Plan, Act.

Practical Reasoning
"Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes." (Bratman)
Human practical reasoning consists of two activities:
- deliberation: deciding what state of affairs we want to achieve;
- means-ends reasoning: deciding how to achieve these states of affairs.
The output of deliberation is intentions.
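The Sense-Plan-Act structure can be sketched directly. The Python skeleton below (an illustration with invented function names, foreshadowing the cleaning-robot example later in the lecture) shows the three-step decomposition; a real planner would replace the stub:

    # Minimal Sense-Plan-Act skeleton for a deliberative agent (illustrative names).
    def sense(world_model, percept):
        """Update the symbolic world model from the latest percept."""
        world_model[percept["what"]] = percept["where"]
        return world_model

    def plan(world_model, goal):
        """Symbolic reasoning: derive a sequence of actions achieving the goal."""
        # A stand-in for a real planner (e.g. search over action schemas).
        return ["turnright", "forward", "suck"] if goal in world_model else []

    def act(actions):
        for a in actions:
            print("executing", a)

    world_model = {}
    world_model = sense(world_model, {"what": "dirt", "where": (0, 1)})
    act(plan(world_model, "dirt"))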

Intentions (1/4)
Agents need to determine ways of achieving intentions. If I have an intention to φ, you would expect me to devote resources to deciding how to bring it about.
Intentions provide a filter for adopting other intentions, which must not conflict. If I have an intention to φ, you would not expect me to adopt an intention ψ such that φ and ψ are mutually exclusive.

Intentions (2/4)
Agents believe their intentions are possible. An agent believes there is at least some way that the intentions could be brought about.
Agents do not believe they will not bring about their intentions. It would not be rational of me to adopt an intention to φ if I believed φ was not possible.

Intentions (3/4)
Under certain circumstances, agents believe they will bring about their intentions. Still, it would not normally be rational of me to believe that I will certainly bring my intentions about: intentions can fail. Moreover, it does not make sense that if I believe φ is inevitable I would adopt it as an intention.
Agents track the success of their intentions, and are inclined to try again if their attempts fail. If an agent's first attempt to achieve φ fails, then, all other things being equal, it will try an alternative plan to achieve φ.

Intentions (4/4)
Agents need not intend all the expected side effects of their intentions. If I believe φ → ψ and I intend that φ, I do not necessarily intend ψ also. (Intentions are not closed under implication.)
This last problem is known as the side effect or package deal problem. I may believe that going to the dentist involves pain, and I may also intend to go to the dentist, but this does not imply that I intend to suffer pain!

Intentions - Summary
- Intentions drive means-ends reasoning
- Intentions persist
- Intentions constrain future deliberation
- Intentions influence beliefs upon which future practical reasoning is based

Means-Ends Reasoning
Given:
- a representation of a goal/intention to achieve,
- a representation of the actions the agent can perform,
- a representation of the environment,
generate a plan to achieve the goal.

Agent Control Loop Version 1

    while true:
        observe the world;
        update the internal world model;
        deliberate about what intentions to achieve next;
        use means-ends reasoning to get a plan for the intentions;
        execute the plan;

In pseudo-code (B: beliefs, I: intentions, P: plan):

    while true do
        get next percept p;
        B := brf(B, p);        // revise beliefs from the new percept
        I := deliberate(B);
        P := plan(B, I);
        execute(P);

Deliberation (1/2)
How does an agent deliberate?
- Begin by trying to understand what the options available to you are.
- Choose between them, and commit to some.
Chosen options are then intentions.

Deliberation (2/2)
The deliberate function can be decomposed into two distinct functional components:
- option generation, in which the agent generates a set of possible alternatives. We represent option generation via a function, options, which takes the agent's current beliefs and current intentions, and from them determines a set of options (= desires).
- filtering, in which the agent chooses between competing alternatives, and commits to achieving them. In order to select between competing options, an agent uses a filter function.

Agent Control Loop Version 2

    while true do
        get next percept p;
        B := brf(B, p);
        D := options(B, I);
        I := filter(B, D, I);
        P := plan(B, I);
        execute(P);

Example of Logic Based Agent (1/3)
Cleaning robot with:
- Percepts p = {dirt, X, Y}
- Actions A = {turnright, forward, suck, ...}
- Start: (0, 0, North)
- Goal: searching and cleaning dirt

Example of Logic Based Agent (2/3)
Beliefs B are: {dirt, 0, 1}, {dirt, 1, 1}, {pos, 0, 0, East}
Options D: {clean, 0, 1}, {clean, 1, 1}

Example of Logic Based Agent (3/3)
After filtering, the intention is: {clean, 0, 1}
Plan P: {turnright, forward, forward, suck}

Commitment Strategies
- Blind commitment: a blindly committed agent will continue to maintain an intention until it believes the intention has actually been achieved. Blind commitment is also sometimes referred to as fanatical commitment.
- Single-minded commitment: a single-minded agent will continue to maintain an intention until it believes that either the intention has been achieved, or else that it is no longer possible to achieve the intention.
- Open-minded commitment: an open-minded agent will maintain an intention as long as it is still believed possible.

Commitment - Agent Control Loop Version 3
An agent has commitment both to ends (i.e. the state of affairs it wishes to bring about) and to means (i.e. the mechanism via which the agent wishes to achieve that state of affairs).
Currently, our agent control loop is overcommitted, both to means and to ends. Modification: replan if ever a plan goes wrong.

    while true do
        get next percept p;
        B := brf(B, p);
        D := options(B, I);
        I := filter(B, D, I);
        P := plan(B, I);
        while not empty(P) do
            a := first(P);
            execute(a);
            P := rest(P);
            get next percept p;
            B := brf(B, p);
            if not sound(P, B, I) then
                P := plan(B, I);

Commitment - Agent Control Loop Version 4
Still overcommitted to intentions: the agent never stops to consider whether or not its intentions are appropriate. Modification: stop to determine whether intentions have succeeded or whether they are impossible (single-minded commitment).

    while true do
        get next percept p;
        B := brf(B, p);
        D := options(B, I);
        I := filter(B, D, I);
        P := plan(B, I);
        while not (empty(P) or succeeded(B, I) or impossible(B, I)) do
            a := first(P);
            execute(a);
            P := rest(P);
            get next percept p;
            B := brf(B, p);
            if not sound(P, B, I) then
                P := plan(B, I);

Intention Reconsideration - Agent Control Loop Version 5
Our agent gets to reconsider its intentions once every time around the outer control loop, i.e., when:
- it has completely executed a plan to achieve its current intentions; or
- it believes it has achieved its current intentions; or
- it believes its current intentions are no longer possible.
This is limited in the way that it permits an agent to reconsider its intentions. Modification: reconsider intentions after executing every action.

    while true do
        get next percept p;
        B := brf(B, p);
        D := options(B, I);
        I := filter(B, D, I);
        P := plan(B, I);
        while not (empty(P) or succeeded(B, I) or impossible(B, I)) do
            a := first(P);
            execute(a);
            P := rest(P);
            get next percept p;
            B := brf(B, p);
            D := options(B, I);
            I := filter(B, D, I);
            if not sound(P, B, I) then
                P := plan(B, I);

Intention Reconsideration - Agent Control Loop Version 6
But intention reconsideration is costly! A dilemma:
- an agent that does not stop to reconsider its intentions sufficiently often will continue attempting to achieve its intentions even after it is clear that they cannot be achieved, or that there is no longer any reason for achieving them;
- an agent that constantly reconsiders its intentions may spend insufficient time actually working to achieve them, and hence runs the risk of never actually achieving them.
Solution: incorporate an explicit meta-level control component that decides whether or not to reconsider.

    while true do
        get next percept p;
        B := brf(B, p);
        D := options(B, I);
        I := filter(B, D, I);
        P := plan(B, I);
        while not (empty(P) or succeeded(B, I) or impossible(B, I)) do
            a := first(P);
            execute(a);
            P := rest(P);
            get next percept p;
            B := brf(B, p);
            if reconsider(B, I) then
                D := options(B, I);
                I := filter(B, D, I);
            if not sound(P, B, I) then
                P := plan(B, I);

Optimal Intention Reconsideration
Kinny and Georgeff experimentally investigated the effectiveness of intention reconsideration strategies. Two different types of reconsideration strategy were used:
- bold agents never pause to reconsider intentions, and
- cautious agents stop to reconsider after every action.
Dynamism in the environment is represented by the rate of world change, γ.

Kinny and Georgeff's Results
If γ is low (i.e., the environment does not change quickly), then bold agents do well compared to cautious ones. This is because cautious ones waste time reconsidering their commitments while bold agents are busy working towards, and achieving, their intentions.
If γ is high (i.e., the environment changes frequently), then cautious agents tend to outperform bold agents. This is because they are able to recognize when intentions are doomed, and also to take advantage of serendipitous situations and new opportunities when they arise.
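Collecting the refinements above into executable form: the Python sketch below is a minimal rendering of the final control loop, with all component functions (brf, options, filter, plan, sound, reconsider) stubbed out as placeholders; it demonstrates the loop structure, not a real BDI implementation.

    # Minimal executable rendering of the final BDI control loop; every
    # component function is an illustrative stub, not a real implementation.
    def brf(B, p):        return B | {p}                 # belief revision
    def options(B, I):    return {"clean(0,1)"}          # generate desires
    def filter_(B, D, I): return set(D)                  # choose intentions
    def plan(B, I):       return ["turnright", "forward", "forward", "suck"]
    def succeeded(B, I):  return False
    def impossible(B, I): return False
    def sound(P, B, I):   return True                    # does the plan still work?
    def reconsider(B, I): return False                   # meta-level control
    def get_percept():    return "dirt(0,1)"
    def execute(a):       print("executing", a)

    B, I = set(), set()
    for _ in range(1):                     # 'while true' bounded for the demo
        B = brf(B, get_percept())
        D = options(B, I)
        I = filter_(B, D, I)
        P = plan(B, I)
        while P and not (succeeded(B, I) or impossible(B, I)):
            a, P = P[0], P[1:]
            execute(a)
            B = brf(B, get_percept())
            if reconsider(B, I):           # meta-level: is reconsidering worthwhile?
                D = options(B, I)
                I = filter_(B, D, I)
            if not sound(P, B, I):
                P = plan(B, I)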

The Representation/Reasoning Problem
- How to symbolically represent information about complex real-world entities and processes.
- How to translate the perceived world into an accurate, adequate symbolic description, in time for that description to be useful: vision, speech recognition, learning.
- How to get agents to reason with this information in time for the results to be useful: knowledge representation, automated reasoning, planning.
- During computation, the dynamic world might change, and thus the solution may no longer be valid!
- How to represent temporal information, e.g., how a situation changes over time?

Criticism of Symbolic AI
There are many unsolved problems associated with symbolic AI.
"Most of what people do in their day to day lives is not problem-solving or planning, but rather it is routine activity in a relatively benign, but certainly dynamic, world." (Brooks, 1991)
These problems have led some researchers to question the viability of the whole paradigm, and to the development of reactive architectures. Although united by a belief that the assumptions underpinning mainstream AI are in some sense wrong, reactive agent researchers use many different techniques.

Reactive Architecture

Brooks' Design Criteria
- An agent must cope appropriately and in a timely fashion with changes in its environment.
- An agent should be robust with respect to its environment.
- An agent should be able to maintain multiple goals and switch between them.
- An agent should do something; it should have some purpose in being.

Brooks - Behaviour Languages
Brooks has put forward three theses:
- Intelligent behaviour can be generated without explicit representations of the kind that symbolic AI proposes.
- Intelligent behaviour can be generated without explicit abstract reasoning of the kind that symbolic AI proposes.
- Intelligence is an emergent property of certain complex systems.

Brooks - Key Ideas
- Situatedness and embodiment: the world is its own best model, and it gives the agent a firm ground for its reasoning.
- Intelligence and emergence: intelligent behaviour arises as a result of an agent's interaction with its environment. Also, intelligence is "in the eye of the beholder"; it is not an innate, isolated property.

The Subsumption Architecture (1/2)
To illustrate his ideas, Brooks built some robots based on his subsumption architecture.
- A subsumption architecture is a hierarchy of task-accomplishing behaviours.
- Each behaviour is a rather simple rule-like structure.
- Each behaviour competes with others to exercise control over the agent.

The Subsumption Architecture (2/2)
[Figures: traditional decomposition into functional modules vs. decomposition based on task-achieving behaviours.]

The Subsumption Architecture
Lower layers represent more primitive kinds of behaviour (such as avoiding obstacles), and have precedence over layers further up the hierarchy.
The resulting systems are, in terms of the amount of computation they do, extremely simple. Some of the robots do tasks that would be impressive if they were accomplished by symbolic AI systems.
[Figure: example of a reactive architecture.]
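As a rough illustration (mine, not Brooks' code), the Python sketch below shows a two-layer subsumption scheme in which a lower "avoid obstacles" behaviour takes precedence over a higher "wander" behaviour, exactly the precedence rule just described:

    # Illustrative two-layer subsumption control loop (invented example).
    def avoid_obstacles(percept):
        """Lowest layer: fires whenever an obstacle is ahead; has precedence."""
        return "turn_left" if percept["obstacle_ahead"] else None

    def wander(percept):
        """Higher layer: only gets control when no lower layer fires."""
        return "forward"

    # Behaviours ordered from lowest layer (highest precedence) upwards.
    layers = [avoid_obstacles, wander]

    def control(percept):
        for behaviour in layers:     # lower layers subsume (override) higher ones
            action = behaviour(percept)
            if action is not None:
                return action

    print(control({"obstacle_ahead": True}))   # turn_left
    print(control({"obstacle_ahead": False}))  # forward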

State Machines

Finite State Machines
A finite state machine is a machine that is in one state among a finite number of states. It is defined by:
- a finite number of states S
- a finite number of transitions T between states
- an initial state s₀ ∈ S
- the current state s ∈ S
Examples: a light switch, an ambulance.

Benefits and Drawbacks
Benefits:
- Simple
- Predictable
- Flexible
- Fast
- Verifiable / Provable
Drawbacks:
- Complexity increases faster than the number of states and transitions
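The light-switch example can be written directly as a transition table. A minimal Python sketch (the state and event names are the obvious ones for this example):

    # Light switch as a finite state machine: states S, transitions T, initial s0.
    S = {"off", "on"}
    T = {("off", "press"): "on",     # transition table: (state, event) -> next state
         ("on", "press"): "off"}
    s = s0 = "off"                   # initial and current state

    for event in ["press", "press", "press"]:
        s = T[(s, event)]
    print(s)  # on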

State Charts
State charts extend finite state machines with:
- Hierarchical states
- Concurrent states
- Data flow
[Figures: a calculator modeled as a flat FSM vs. as a state chart.]

Advantages of Reactive Systems
- Simplicity, i.e. modules have high expressiveness
- Computational tractability
- Robustness against failure, i.e. the possibility of modeling redundancies
- Overall behavior emerges from interactions
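To make "hierarchical states" concrete, here is a small, invented Python sketch in which a composite "on" state contains its own sub-machine, so a single outer event ("power") interrupts whatever the inner machine is doing:

    # Illustrative hierarchical state: a composite "on" state with sub-states.
    class Calculator:
        def __init__(self):
            self.state = "off"       # outer machine: off / on
            self.substate = None     # inner machine, only active while "on"

        def handle(self, event):
            if self.state == "off":
                if event == "power":
                    self.state, self.substate = "on", "operand1"
            elif self.state == "on":
                if event == "power":
                    # Outer transition: leaves the whole composite state.
                    self.state, self.substate = "off", None
                elif event == "operator":
                    # Inner transition within the composite "on" state.
                    self.substate = "operand2"

    c = Calculator()
    for e in ["power", "operator", "power"]:
        c.handle(e)
    print(c.state, c.substate)  # off None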

Problems With Reactive Systems
- The local environment must contain enough information to make a decision; it is hard to take non-local information into account.
- Behavior emerges from interactions: how do we engineer the system in the general case? How do we model long-term decisions? How do we implement varying goals?
- Hard to engineer, especially large systems with many interacting layers.

Hybrid Architecture

A hybrid system is neither a completely deliberative nor a completely reactive approach. An obvious approach is to build an agent out of two (or more) subsystems:
- a deliberative one, containing a symbolic world model, which develops plans and makes decisions in the way proposed by symbolic AI; and
- a reactive one, which is capable of reacting to events without complex reasoning.
Often, the reactive component is given some kind of precedence over the deliberative one. This kind of structuring leads naturally to the idea of a layered architecture.

In a layered architecture, an agent's control subsystems are arranged into a hierarchy, with higher layers dealing with information at increasing levels of abstraction. A key problem in layered architectures is what kind of control framework to embed the agent's subsystems in, to manage the interactions between the various layers.
- Horizontal layering: layers are each directly connected to the sensory input and action output. In effect, each layer itself acts like an agent, producing suggestions as to what action to perform.
- Vertical layering: sensory input and action output are each dealt with by at most one layer.
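A horizontally layered controller can be sketched in a few lines: each layer sees the percept and proposes an action, and a simple mediator gives the reactive layer the precedence described above (an invented illustration, not a specific published architecture):

    # Illustrative horizontally layered hybrid agent: every layer sees the
    # percept; a mediator gives the reactive layer precedence.
    def reactive_layer(percept):
        """Fast rule: react immediately to danger, else defer."""
        return "brake" if percept["obstacle"] else None

    def deliberative_layer(percept):
        """Slow symbolic planning (stubbed): pursue the current goal."""
        return "follow_planned_route"

    def mediator(percept):
        # Reactive suggestions win whenever the reactive layer fires.
        return reactive_layer(percept) or deliberative_layer(percept)

    print(mediator({"obstacle": True}))   # brake
    print(mediator({"obstacle": False}))  # follow_planned_route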

[Figures: examples of hybrid (layered) architectures.]

Summary

Originally (1956-1985), pretty much all agents designed within AI were symbolic reasoning agents. In its purest expression, this approach proposes that agents use explicit logical reasoning in order to decide what to do.
Problems with symbolic reasoning led to a reaction against this: the so-called reactive agents movement, 1985-present.
From 1990 to the present, a number of alternatives have been proposed, notably hybrid architectures, which attempt to combine the best of reasoning and reactive architectures.

Deliberative Architectures
Properties:
- Internal state (using symbolic representation)
- Search-based decision making
- Goal directed
Benefits:
- Nice and clear (logic) semantics
- Easy to analyze, by proving properties
Problems:
- Can't react in a timely manner to events that require immediate action
- Intractable algorithms
- Hard to create a symbolic representation from continuous sensor data: the anchoring problem

Reactive Agent Architectures
Properties:
- No explicit world model
- Rule-based decision making
Benefits:
- Efficient
- Robust
Problems:
- The local environment must contain enough information to make a decision
- Easy to build small agents, hard to build agents with many behaviors or rules
- Emergent behavior

Hybrid Agent Architectures
Properties:
- Tries to combine the good parts of both reactive and deliberative architectures
- Usually layered architectures
Benefits:
- Attacks the problem at different abstraction levels
- Has the benefits of both architecture types
Problems:
- Hard to combine the different parts