Software Agents: An Overview

Hyacinth S. Nwana

Intelligent Systems Research
Advanced Applications & Technology Department
BT Laboratories, Martlesham Heath
Ipswich, Suffolk, IP5 7RE, U.K.

e-mail: hyacinth@info.bt.co.uk
Tel: (+44 1473) 605457
Fax: (+44 1473) 642459

Knowledge Engineering Review, Vol. 11, No. 3, pp. 205-244, October/November 1996.
© Cambridge University Press, 1996
Abstract

Agent software is a rapidly developing area of research. However, the overuse of the word 'agent' has tended to mask the fact that, in reality, there is a truly heterogeneous body of research being carried out under this banner. This overview paper presents a typology of agents. Next, it places agents in context, defines them and then goes on, inter alia, to overview critically the rationales, hypotheses, goals, challenges and state-of-the-art demonstrators of the various agent types in our typology. Hence, it attempts to make explicit much of what is usually implicit in the agents literature. It also proceeds to overview some other general issues which pertain to all the types of agents in the typology. This paper largely reviews software agents, and it also contains some strong opinions that are not necessarily widely accepted by the agent community.
1 Introduction

The main goal of this paper is to overview the rapidly evolving area of software agents. The overuse of the word 'agent' has tended to mask the fact that, in reality, there is a truly heterogeneous body of research being carried out under this banner. This paper places agents in context, defines them and then goes on, inter alia, to overview critically the rationales, hypotheses, goals, challenges and state-of-the-art demonstrators/prototypes of the various agent types currently under investigation. It also proceeds to overview some other general issues which pertain to all the classes of agents identified. Finally, it speculates as to the future of agents research in the short, medium and long terms. This paper largely reviews software agents. Since we are overviewing a broad range of agent types, we do not provide a definition of agenthood at this juncture; we defer such issues until Section 4, where we present our typology of agents.

The breakdown of the paper is as follows. Section 2 situates smart agent research in the broad field of Distributed Artificial Intelligence (DAI) and provides a brief history. Section 3 identifies the scope of applicability of agents research and notes that there is a diverse range of interested parties. Before the core critical overview of the agent typology in Section 5, Section 4 provides our view of what smart agents are; it also identifies the different types of agents which fall under the 'agents' banner and warns that truly smart or intelligent agents do not yet exist! They are still very much the aspiration of agent researchers. Section 6 overviews some more general issues on agents and speculates briefly on the future of agents in the short, medium and long terms. Section 7 concludes the paper.
2 Software Agents: History and the Context of this Paper

Software agents have evolved from multi-agent systems (MAS), which in turn form one of three broad areas which fall under DAI, the other two being Distributed Problem Solving (DPS) and Parallel AI (PAI). Hence, as with multi-agent systems, they inherit many of DAI's motivations, goals and potential benefits. For example, thanks to distributed computing, software agents inherit DAI's potential benefits including modularity, speed (due to parallelism) and reliability (due to redundancy). They also inherit those due to AI, such as operation at the knowledge level, easier maintenance, reusability and platform independence (Huhns & Singh, 1994).
The concept of an agent, in the context of this paper, can be traced back to the early days of research into DAI in the 1970s - indeed, to Carl Hewitt's concurrent actor model (Hewitt, 1977). In this model, Hewitt proposed the concept of a self-contained, interactive and concurrently-executing object which he termed an 'actor'. This object had some encapsulated internal state and could respond to messages from other similar objects: an actor

"is a computational agent which has a mail address and a behaviour. Actors communicate by message-passing and carry out their actions concurrently" (Hewitt, 1977, p. 131).
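Hewitt's description maps naturally onto modern code. The following is a minimal illustrative sketch in Python (our construction, not Hewitt's notation): each actor owns a mailbox (its 'mail address') and a behaviour, and interacts with other actors only by asynchronous message-passing, executing concurrently on its own thread.

```python
import queue
import threading

class Actor:
    """A minimal actor: a mail address (a queue) plus a behaviour,
    executing concurrently on its own thread."""
    def __init__(self, behaviour):
        self.mailbox = queue.Queue()   # the actor's 'mail address'
        self.behaviour = behaviour     # how it responds to messages
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        self.mailbox.put(message)      # communication is by message-passing only

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:        # sentinel: stop the actor
                break
            self.behaviour(self, message)

# Usage: an 'echo' actor whose behaviour simply records what it receives.
received = []
echo = Actor(lambda actor, msg: received.append(msg))
echo.send("hello")
echo.send(None)                        # ask the actor to stop
echo._thread.join(timeout=2)
```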
Broadly, for the purposes of this paper, we split the research on agents into two main strands: the first spanning the period 1977 to the current day, and the second from 1990 to the current day. Strand 1 work on smart agents, which began in the late 1970s and has continued all through the 1980s to the current day, concentrated mainly on deliberative-type agents with symbolic internal models; later in this paper, we type these as collaborative agents. A deliberative agent is

"one that possesses an explicitly represented, symbolic model of the world, and in which decisions (for example about what actions to perform) are made via symbolic reasoning" (Wooldridge, 1995, p. 42).
Initially, strand 1 work concentrated on macro issues such as the interaction and communication between agents, the decomposition and distribution of tasks, coordination and cooperation, and conflict resolution via negotiation. Its goal was to specify, analyse, design and integrate systems comprising multiple collaborative agents. This resulted in classic systems and work such as the actor model (Hewitt, 1977), MACE (Gasser et al., 1987), DVMT (Lesser & Corkill, 1981), MICE (Durfee & Montgomery, 1989), MCS (Doran et al., 1990), the contract net coordination approach (Smith, 1980; Davis & Smith, 1983), and MAS/DAI planning and game theories (Rosenschein, 1985; Zlotkin & Rosenschein, 1989; Rosenschein & Zlotkin, 1994). These 'macro' aspects of agents, as Gasser (1991) terms them, emphasise the society of agents over individual agents, while micro issues relate specifically to the latter. In any case, such issues are well summarised in Chaib-draa et al. (1992), Bond & Gasser (1988) and Gasser & Huhns (1989). More recent work under this strand includes TAEMS (Decker & Lesser, 1993; Decker, 1995), DRESUN (Carver et al., 1991; Carver & Lesser, 1995), VDT (Levitt et al., 1994) and ARCHON (Wittig, 1992; Jennings et al., 1995). Note that game theoretic work should arguably not be classed as a macro approach; it may, indeed, lie more towards the micro end of the spectrum.
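To give a concrete flavour of one of these macro-level coordination mechanisms, the contract net approach (Smith, 1980) can be caricatured in a few lines: a manager announces a task, would-be contractors bid on it, and the task is awarded to the best bidder. The Python below is our simplified single-round sketch; the agent names and cost-based bidding are illustrative assumptions, not Smith's protocol in full.

```python
def contract_net(task, contractors):
    """Minimal one-round contract net: the manager announces a task,
    each contractor returns a bid (lower cost = better), and the
    manager awards the task to the best bidder."""
    bids = {name: bid_fn(task) for name, bid_fn in contractors.items()}
    winner = min(bids, key=bids.get)   # award to the cheapest bid
    return winner, bids[winner]

# Usage: three hypothetical contractor agents bid on a sensing task.
contractors = {
    "agent-a": lambda task: 7.0,
    "agent-b": lambda task: 3.5,   # the best (cheapest) bid
    "agent-c": lambda task: 9.2,
}
winner, cost = contract_net("monitor-sector-4", contractors)
```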
In addition to the macro issues, strand 1 work has also been characterised by research and development into theoretical, architectural and language issues. In fact, such work evolves naturally, though not exclusively, from the investigation of the macro issues. This is well summarised in Wooldridge & Jennings (1995a), and in the edited collections of papers: Wooldridge & Jennings (1995b) and Wooldridge et al. (1996).
However, since 1990 or thereabouts, there has evidently been another distinct strand to the research and development work on software agents - the range of agent types being investigated is now much broader. Thus, this paper is complementary to Wooldridge & Jennings (1995a) in placing emphasis on this strand although, naturally, there is some overlap; i.e. it overviews the broadening typology of agents being investigated by agent researchers. Some cynics may argue that this strand arises because everybody is now calling everything an agent, thereby resulting, inevitably, in such broadness. We sympathise with this viewpoint; indeed, it is a key motivation for this paper - to overview the extensive work that goes under the 'agent' banner. Essentially, our point is that in addition to investigating macro issues and others such as theories, architectures and languages, there has also been an unmistakable trend towards the investigation of a broader range of agent types or classes. The context of this paper is summarised in Table 1 below.
Strand 1   Macro issues                      Bond & Gasser (1988)
                                             Gasser & Huhns (1989)
                                             Chaib-draa et al. (1992)
                                             Gasser et al. (1995)

           Theories, architectures           Wooldridge & Jennings (1995a, 1995b)
           & languages                       Wooldridge et al. (1996)

Strand 2   Diversification in the types      This paper covers this!
           of agents being investigated

Table 1 - Brief History of Software Agents and the Context of this Paper
3 Who is Investigating Software Agents, for What and Why?

We eschew answering this question in a futuristic sense in favour of providing a flavour of the scope of the research and development underway in universities and industrial organisations. The range of firms and universities actively pursuing agent technology is quite broad and the list is ever growing. It includes small non-household names (e.g. Icon, Edify and Verity), medium-size organisations (e.g. Carnegie Mellon University (CMU), General Magic, Massachusetts Institute of Technology (MIT), the University of London) and the really big multinationals (e.g. Alcatel, Apple, AT&T, BT, Daimler-Benz, DEC, HP, IBM, Lotus, Microsoft, Oracle, Sharp). Clearly, these companies are by no means completely homogeneous, particularly if others such as Reuters and Dow Jones are appended to this list.
The scope of the applications being investigated and/or developed is arguably more impressive: it really does range from the mundane (strictly speaking, not agent applications) to the moderately 'smart'. Lotus, for example, will be providing a scripting language in their forthcoming version of Notes which would allow users to write their own individual scripts in order to manage their e-mails and calendars, set up meetings, etc. This is based on the view that most people do not really need 'smart' agents. Towards the smart end of the spectrum are the likes of Sycara's (1995) visitor hosting system at CMU. In this system, "task-specific" and "information-specific" agents cooperate in order to create and manage a visitor's schedule at CMU. To achieve this, first, the agents access other on-line information resources in order to determine the visitor's areas of interest, name and organisation, and resolve the inevitable inconsistencies and ambiguities. More information is later garnered, including the visitor's status in her organisation and the projects she is working on. Second, using the information gathered on the visitor, they retrieve information (e.g. rank, telephone number and e-mail address) from personnel databases in order to determine appropriate attendees (i.e. faculty). Third, the visitor hosting agent selects an initial list of faculty to be contacted and composes messages which it dispatches to the calendar agents of these faculty members, asking whether they are willing to meet this visitor and at what time. If a faculty member does not have a calendar agent, an e-mail is composed and despatched. Fourth, the responses are collated. Fifth, the visitor hosting agent creates the schedule for the visitor, which involves booking rooms for the various appointments with faculty members. Naturally, the system interacts with the human organiser and seeks her confirmation, refutations, suggestions and advice.
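The five steps above amount to a simple pipeline, which can be sketched as follows. This is our own illustrative Python, not Sycara's implementation; all names and data structures are hypothetical stand-ins.

```python
def host_visitor(visitor_name, faculty_db, calendar_agents):
    """Illustrative sketch of the visitor hosting workflow:
    profile the visitor, select candidate faculty, query their
    calendar agents, collate responses and build the schedule."""
    # 1. Gather the visitor's profile (interests, organisation, status).
    profile = {"name": visitor_name, "interests": {"agents", "planning"}}
    # 2. Determine appropriate attendees: faculty with overlapping interests.
    candidates = [f for f in faculty_db
                  if profile["interests"] & f["interests"]]
    # 3-4. Ask each candidate's calendar agent; collate the responses.
    responses = {f["name"]: calendar_agents[f["name"]](profile)
                 for f in candidates if f["name"] in calendar_agents}
    # 5. Build the schedule from the accepted slots.
    return {name: slot for name, slot in responses.items() if slot}

# Usage with two hypothetical faculty members, one of whom has a
# calendar agent that offers a meeting slot.
faculty_db = [
    {"name": "prof-x", "interests": {"agents"}},
    {"name": "prof-y", "interests": {"databases"}},
]
calendar_agents = {"prof-x": lambda visitor: "Tue 10:00"}
schedule = host_visitor("Dr. Smith", faculty_db, calendar_agents)
```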
Most would agree that this demonstrator is pretty smart, but its 'smartness' derives from the fact that the 'value' gained from individual stand-alone agents coordinating their actions by working in cooperation is greater than that gained from any individual agent. This is where agents really come into their element.
More examples of applications are described later, but the application domains in which agent solutions are being applied or investigated include workflow management, network management, air-traffic control, business process re-engineering, data mining, information retrieval/management, electronic commerce, education, personal digital assistants (PDAs), e-mail, digital libraries, command and control, smart databases, scheduling/diary management, etc. Indeed, as Guilfoyle (1995) notes,

"in 10 years time most new IT development will be affected, and many consumer products will contain embedded agent-based systems".
The potential of agent technology has been much hailed; e.g. a 1994 report by Ovum, a UK-based market research company, is titled "Intelligent agents: the new revolution in software" (Ovum, 1994). The same firm has apparently predicted that the market sector totals for agent software and products for the USA and Europe will be worth at least $3.9 billion by the year 2000, in contrast to an estimated 1995 figure of $476 million (computed from figures quoted in Guilfoyle, 1995). Such predictions are perhaps overly optimistic.
Moreover, as King (1995) notes, telecommunications companies like BT and AT&T are working towards incorporating smart agents into their vast networks; entertainment (e.g. television) and retail firms would like to exploit agents to capture our programme viewing and buying patterns respectively; and computer firms are building the software and hardware tools and interfaces which would harbour numerous agents. Reinhardt (1994) reports that IBM plans (or may already have done so) to launch a system, the IBM Communications Systems (ICS), which would use agents to deliver messages to mobile users in the form they want, be it fax, speech or text, depending on the equipment the user is carrying at the time, e.g. a PDA, a portable PC or a mobile phone. At BT Laboratories, we have also carried out some agent-related research on a similar idea, where the message could be routed to the nearest local device, which may or may not belong to the intended recipient of the message. In this case, the recipient's agent negotiates with other agents for permission to use their facilities, taking into consideration issues such as costs and bandwidth in such negotiations (Titmuss et al., 1996). At MIT, Pattie Maes' group is investigating agents that can match buyers to sellers or which can build coalitions of people with similar interests. They are also drawing on biological evolution theory to implement demonstrators in which a user only possesses the 'fittest' agents: agents would 'reproduce' and only the fittest of them would survive to serve their masters; the weaker ones would be purged.
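The selection idea behind such demonstrators can be caricatured in a few lines. This is our toy sketch only, not the MIT group's design: agents are scored by a fitness function, the weaker ones are purged, and the fittest 'reproduce' with small variations.

```python
import random

def evolve(agents, fitness, survivors=2):
    """Toy selection step: score each agent, keep only the fittest,
    and let the survivors 'reproduce' with a small mutation."""
    ranked = sorted(agents, key=fitness, reverse=True)[:survivors]
    offspring = [a + random.uniform(-0.1, 0.1) for a in ranked]
    return ranked + offspring

# Agents here are just numbers whose fitness is their own value.
population = [0.2, 0.9, 0.5, 0.1]
new_population = evolve(population, fitness=lambda a: a)
# The two fittest (0.9 and 0.5) survive and reproduce; the weaker
# agents (0.2 and 0.1) are purged.
```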
It is important to note that most of these are still demonstrators only: converting them into real, usable applications would pose even greater challenges, some of which have been anticipated but many of which are currently unforeseen. The essential message of this section is that agents are here to stay, not least because of their diversity, their wide range of applicability and the broad spectrum of companies investing in them. As we move further and further into the information age, any information-based organisation which does not invest in agent technology may be committing commercial hara-kiri.
4 What is an Agent?

We have as much chance of agreeing on a consensus definition for the word 'agent' as AI researchers have of arriving at one for 'artificial intelligence' itself - nil! Recent postings to the software agents mailing list (agents@sunlabs.eng.Sun.COM) prove this. Indeed, in a couple of these postings, some propounded the introduction of a financial and/or legal aspect to the definition of agents, much to the derision of others. There are at least two reasons why it is so difficult to define precisely what agents are. Firstly, agent researchers do not 'own' this term in the same way as fuzzy logicians/AI researchers, for example, own the term 'fuzzy logic' - it is one that is used widely in everyday parlance, as in travel agents, estate agents, etc.
Secondly, even within the software fraternity, the word 'agent' is really an umbrella term for a heterogeneous body of research and development. The response of some agent researchers to this lack of definition has been to invent yet more synonyms, and it is arguable whether these solve anything or just add further to the confusion. So we now have synonyms including knowbots (i.e. knowledge-based robots), softbots (software robots), taskbots (task-based robots), userbots, robots, personal agents, autonomous agents and personal assistants. To be fair, there are some good reasons for having such synonyms. Firstly, agents come in many physical guises: for example, those that inhabit the physical world, some factory say, are called robots; those that inhabit vast computer networks are sometimes referred to as softbots; those that perform specific tasks are sometimes called taskbots; and autonomous agents refer typically to mobile agents or robots which operate in dynamic and uncertain environments. Secondly, agents can play many roles, hence personal assistants or knowbots, which have expert knowledge in some specific domain. Furthermore, due to the multiplicity of roles that agents can play, there is now a plethora of adjectives which precede the word 'agent', as in the following drawn only from King's (1995) paper: search agents, report agents, presentation agents, navigation agents, role-playing agents, management agents, search and retrieval agents, domain-specific agents, development agents, analysis and design agents, testing agents, packaging agents and help agents. King's paper is futuristic and provides a role-specific classification of agents, and so such rampant metaphorical use of the word is fine. But there is also another view: that it gives currency to others to refer to just about anything as an agent. For example, he considers "print monitors for open printing, fax redial, and others" (p. 18) as agents, albeit simple ones. As Wayner & Joch (1995) write, somewhat facetiously,

"the metaphor has become so pervasive that we're waiting for some enterprising company to advertise its computer switches as empowerment agents" (p. 95).

We tend to use the word slightly more carefully and selectively, as we explain later.
When we really have to, we define an agent as referring to a component of software and/or hardware which is capable of acting exactingly in order to accomplish tasks on behalf of its user. Given a choice, we would rather say it is an umbrella term, meta-term or class, which covers a range of other more specific agent types, and then go on to list and define what these other agent types are. This way, we reduce the chances of getting into the usual prolonged, philosophical and sterile arguments which typically follow the former definition, when any old software is conceivably recastable as agent-based software.
4.1 A Typology of Agents

This section attempts to place existing agents into different agent classes, i.e. its goal is to investigate a typology of agents. A typology refers to the study of types of entities. There are several dimensions along which to classify existing software agents.

Firstly, agents may be classified by their mobility, i.e. by their ability to move around some network. This yields the classes of static and mobile agents.
Secondly, they may be classed as either deliberative or reactive. Deliberative agents derive from the deliberative thinking paradigm: the agents possess an internal symbolic reasoning model and they engage in planning and negotiation in order to achieve coordination with other agents. Work on reactive agents originates from research carried out by Brooks (1986) and Agre & Chapman (1987). These agents, on the contrary, do not have any internal symbolic models of their environment; they act using a stimulus/response type of behaviour, responding to the present state of the environment in which they are embedded (Ferber, 1994). Indeed, Brooks has argued that intelligent behaviour can be realised without the sort of explicit, symbolic representations of traditional AI (Brooks, 1991b).
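The contrast can be made concrete with a toy sketch (our illustrative Python; the percepts and actions are invented): the reactive agent maps stimuli directly to responses, while the deliberative agent first updates an explicit world model and reasons over it before acting.

```python
class ReactiveAgent:
    """Pure stimulus/response: a fixed rule table maps the perceived
    state of the environment directly to an action; no internal model."""
    def __init__(self, rules):
        self.rules = rules

    def act(self, percept):
        return self.rules.get(percept, "do-nothing")

class DeliberativeAgent:
    """Keeps an explicit symbolic world model, updates it from percepts,
    and reasons over the model (here, trivially) before acting."""
    def __init__(self):
        self.world_model = set()           # explicit symbolic representation

    def act(self, percept):
        self.world_model.add(percept)      # update beliefs first
        if "obstacle-ahead" in self.world_model:
            return "plan-detour"           # decision via (toy) reasoning
        return "move-forward"

reactive = ReactiveAgent({"obstacle-ahead": "turn-left"})
deliberative = DeliberativeAgent()
deliberative.act("obstacle-ahead")
# Unlike the reactive agent, the deliberative one remembers the obstacle
# even when the current stimulus no longer mentions it.
later = deliberative.act("clear-path")
```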
Thirdly, agents may be classified along several ideal and primary attributes which agents should exhibit. At BT Labs, we have identified a minimal list of three: autonomy, learning and cooperation. We appreciate that any such list is contentious, but it is no more and no less so than any other proposal. Hence, we are not claiming that this is a necessary or sufficient set. Autonomy refers to the principle that agents can operate on their own without the need for human guidance, even though this would sometimes be invaluable. Hence agents have individual internal states and goals, and they act in such a manner as to meet their goals on behalf of their users. A key element of their autonomy is their proactiveness, i.e. their ability to 'take the initiative' rather than acting simply in response to their environment (Wooldridge & Jennings, 1995a). Cooperation with other agents is paramount: it is the raison d'être for having multiple agents in the first place, in contrast to having just one. In order to cooperate, agents need to possess a social ability, i.e. the ability to interact with other agents and possibly humans via some communication language (Wooldridge & Jennings, 1995a). Having said this, it is possible for agents to coordinate their actions without cooperation (Nwana et al., 1996). Lastly, for agent systems to be truly 'smart', they would have to learn as they react and/or interact with their external environment. In our view, agents are (or should be) disembodied bits of 'intelligence'. Though we will not attempt to define what intelligence is, we maintain that a key attribute of any intelligent being is its ability to learn. The learning may also take the form of increased performance over time. We use these three minimal characteristics in Figure 1 to derive four types of agents to include in our typology: collaborative agents, collaborative learning agents, interface agents and truly smart agents.
[Figure 1: a Venn diagram of three intersecting circles labelled 'cooperate', 'learn' and 'autonomous'. The pairwise and three-way intersections define the agent types: collaborative agents (cooperation and autonomy), collaborative learning agents (cooperation and learning), interface agents (autonomy and learning) and smart agents (all three).]

Figure 1 - A Part View of an Agent Typology
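The intersections of Figure 1 can be read off mechanically; the sketch below (our illustration, not part of the original paper) derives an agent type from the three minimal attributes, treating anything outside the intersecting areas as not an agent.

```python
def classify(cooperates, learns, autonomous):
    """Derive the Figure 1 agent type from the three minimal
    attributes. A sketch of the typology only; real agents occupy
    a much richer, multi-dimensional space."""
    if cooperates and learns and autonomous:
        return "smart agent"
    if cooperates and autonomous:
        return "collaborative agent"
    if autonomous and learns:
        return "interface agent"
    if cooperates and learns:
        return "collaborative learning agent"
    return "not an agent"   # outside the intersecting areas of Figure 1

# e.g. an expert system: largely autonomous, but it neither
# cooperates nor learns, so it falls outside the typology.
expert_system = classify(cooperates=False, learns=False, autonomous=True)
```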
We emphasise that these distinctions are not definitive. For example, with collaborative agents, there is more emphasis on cooperation and autonomy than on learning; hence, we do not imply that collaborative agents never learn. Likewise, for interface agents, there is more emphasis on autonomy and learning than on cooperation. We do not consider anything which lies outside the 'intersecting areas' to be an agent. For example, most expert systems are largely 'autonomous' but, typically, they do not cooperate or learn. Ideally, in our view, agents should do all three equally well, but this is the aspiration rather than the reality. Truly smart agents do not yet exist: indeed, as Maes (1995a) notes, "current commercially available agents barely justify the name", let alone the adjective 'intelligent'. Foner (1993) is even more incandescent; though he wrote this in 1993, it still applies today:
"... I find little justification for most of the commercial offerings that call themselves agents. Most of them tend to excessively anthropomorphize the software, and then conclude that it must be an agent because of that very anthropomorphization, while simultaneously failing to provide any sort of discourse or "social contract" between the user and the agent. Most are barely autonomous, unless a regularly-scheduled batch job counts. Many do not degrade gracefully, and therefore do not inspire enough trust to justify more than trivial delegation and its concomitant risks" (Foner, 1993, pp. 39/40).
In effect, like Foner, we assert that the arguments for most commercial offerings being agents suffer from the logical fallacy of petitio principii - they assume what they are trying to prove; i.e. they are circular arguments. Indeed, this applies to other 'agents' in the literature.
In principle, by combining the two constructs so far (i.e. static/mobile and reactive/deliberative) with the agent types identified (i.e. collaborative agents, interface agents, etc.), we could have static deliberative collaborative agents, mobile reactive collaborative agents, static deliberative interface agents, mobile reactive interface agents, etc. These categories, though quite a mouthful, may also be necessary to further classify existing agents. For example, Lashkari et al. (1994) presented a paper at AAAI on 'Collaborative interface agents' which, in our classification, translates to static collaborative interface agents.
Fourthly, agents may sometimes be classified by their roles (preferably, if the roles are major ones), e.g. world wide web (WWW) information agents. This category of agents usually exploits internet search engines such as WebCrawler, Lycos and Spiders. Essentially, they help manage the vast amount of information in wide area networks like the internet. We refer to this class of agents in this paper as information or internet agents. Again, information agents may be static, mobile or deliberative. Clearly, it is also pointless making classes of other minor roles, as in report agents, presentation agents, analysis and design agents, testing agents, packaging agents and help agents - or else the list of classes would grow very large.
Fifthly, we have also included the category of hybrid agents, which combine two or more agent philosophies in a single agent.
There are other attributes of agents which we consider secondary to those already mentioned. For example, is an agent versatile (i.e. does it have many goals, or does it engage in a variety of tasks)? Is an agent benevolent or non-helpful, antagonistic or altruistic? Does an agent lie knowingly, or is it always truthful (this attribute is termed veracity)? Can you trust the agent enough to (risk) delegating tasks to it? Is it temporally continuous? Does it degrade gracefully, in contrast to failing drastically at the boundaries? Perhaps unbelievably, some researchers are also attributing emotional attitudes to agents - do they get 'fed up' being asked to do the same thing time and time again? What role does emotion have in constructing believable agents (Bates, 1994)? Some agents are also imbued with mentalistic attitudes or notions such as beliefs, desires and intentions - such agents are referred to typically as BDI agents (Rao & Georgeff, 1995). Attributes such as these provide for a stronger definition of agenthood.
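The belief-desire-intention distinction mentioned above can be sketched informally as follows. This toy Python is our illustration only, and bears no resemblance to the formal logic of Rao & Georgeff (1995): beliefs are updated from percepts, desires are candidate goals, and intentions are the desires the agent commits to acting on.

```python
class BDIAgent:
    """Toy belief-desire-intention structure: beliefs come from
    percepts, desires are candidate goals, and intentions are the
    desires the agent commits to, given its current beliefs."""
    def __init__(self, desires):
        self.beliefs = set()
        self.desires = set(desires)
        self.intentions = set()

    def perceive(self, fact):
        self.beliefs.add(fact)

    def deliberate(self):
        # Commit only to desires that current beliefs make achievable
        # (here via a hypothetical 'can-<goal>' naming convention).
        self.intentions = {d for d in self.desires
                           if ("can-" + d) in self.beliefs}
        return self.intentions

agent = BDIAgent(desires={"book-room", "send-invite"})
agent.perceive("can-send-invite")
intentions = agent.deliberate()
```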
In essence, agents exist in a truly multi-dimensional space, which is why we have not used a two- or three-dimensional matrix to classify them - this would be incomplete and inaccurate. However, for the sake of clarity of understanding, we have 'collapsed' this multi-dimensional space into a single list. In order to carry out such an audacious move, we have made use of our knowledge of the agents we know are currently 'out there' and of what we wish to aspire to. Therefore, the ensuing list is to some degree arbitrary, but we believe these types cover most of the agent types being investigated currently. We have left out collaborative learning agents (see Figure 1) on the grounds that we do not know of the existence 'out there' of any such agents which collaborate and learn, but are not autonomous. Hence, we identify seven types of agents:
• Collaborative agents
• Interface agents
• Mobile agents
• Information/Internet agents
• Reactive agents
• Hybrid agents
• Smart agents
There are some applications which combine agents from two or more of these categories, and we refer to these as heterogeneous agent systems. Such applications already exist, even though they are relatively few. We also briefly overview such systems in the next section.

Another issue of note (for completeness' sake) is that agents need not be benevolent to one another. It is quite possible that agents may be in competition with one another, or perhaps quite antagonistic towards each other. However, we view competitive agents as potential subclasses of all these types. That is, it is possible to have competitive collaborative-type agents, competitive interface agents, competitive information agents, etc.
4.2 A Critique of Our Typology

As with our definition of agenthood, our typology of agents is bound to be contentious. The two official reviewers of this paper both took issue with it, but their suggestions are, in our opinion, either more debatable or unclear. One reviewer, reviewer 1, claimed that we have confused agents that are defined by what they do (information agents, interface agents and collaborative agents) with other types defined by the sort of technology that underpins them (mobile agents, reactive agents, hybrid agents). Thus, he/she would have preferred a 2-dimensional classification. The second reviewer mentioned a similar point but alluded to a different classification. To a large degree, we disagree with this criticism, though not fully. We believe we had already attempted, perhaps unsuccessfully, to pre-empt this criticism. Firstly, we would not group information agents, interface agents and collaborative agents in one large group: in our view, as we explained earlier, collaborative agents and interface agents are defined by what they are, while information agents are defined by what they do. Secondly, we do not agree fully with the assertion that mobile agents, reactive agents and hybrid agents are all underlying technologies for implementing the former classes. To this reviewer, interface agents are collaborative agents implemented using reactive technology! We simply disagree with this viewpoint. As we explain later in the paper, reactive agents, for example, have a distinct philosophy, hypothesis, etc. which makes them stand out from the rest. We have surveyed the area of technologies for building software agent systems in another paper, Nwana & Wooldridge (1996). However, we agree with the general thrust of the argument to some degree; for example, we fully accept the reviewers' viewpoint that mobility is not a necessary condition for agenthood - a point which is implicit in Section 4.1, and which we explain later. Thirdly, we address such issues when we discuss the individual types more fully in the rest of the paper. Fourthly, we point out, explicitly, in Section 4.1 that agents exist in a truly multi-dimensional space, and that for the sake of clarity of understanding, we have collapsed this multi-dimensional space into a single list. To produce this list, we used a set of criteria which included innate properties of agents which we would prefer to see (autonomy, cooperation, learning), other constructs (static/mobile, deliberative/reactive), major roles (as in information agents) and whether they are hybrid or heterogeneous. In a previous version of this paper, where we had a more hierarchical breakdown, it turned out to be less clear. Fifthly, other typologies in the literature are equally as contentious. For example, Wooldridge & Jennings (1995a) broadly classify agents into the following: gopher agents, service performing agents and proactive agents. We believe this is too general and simplistic a classification. It is for these reasons that we opted for such a 'flat' breakdown. To be fair, apart from the typology, these two reviewers were very complimentary about the paper.
In conclusion, our typology is not without its critics (but then so are all others); as reviewer 1 pointed out, "while I agree that most agents in the literature can be categorised into these types, I think the types are themselves faulty". In this paper, we have deliberately traded accuracy for clarity. Our typology highlights the key contexts in which the word 'agent' is used in the software literature.
4.3 What Agents are Not

In general, we have already noted that a software component which does not fall into one of the intersecting areas of Figure 1 does not count as an agent. In any case, before the word 'agent' came into vogue within the computing/AI fraternity, Minsky, in his Society of Mind (1985), had already used it to formulate his theory of human intelligence. However, Minsky used it to refer to much more basic entities:

"... to explain the mind, we have to show how minds are built from mindless stuff, from parts that are much smaller and simpler than anything we'd consider smart... But what could those simpler particles be - the 'agents' that compose our minds? This is the subject of our book..." (Minsky, 1985, p. 18).

Clearly, Minsky's use of the word 'agent' is quite distinct from its use in this paper.
Furthermore, as noted earlier, expert systems do not meet the preconditions of agenthood, and nor do most knowledge-based system applications. Modules in distributed computing applications do not constitute agents either, as Huhns & Singh (1994) explain. First, such modules are rarely 'smart', and hence much less robust than agents are (or should be); they also do not degrade gracefully. Second, in agent-based systems generally, the communication involves high-level messages...