Introduction to normative multiagent systems
At some point, the view of the normative system as a self-contained logical system is no longer viable.

Challenge 8: Tools for agents to voluntarily give up some norm autonomy by allowing automated norm processing in agent acting and decision making.

In many examples, the autonomy of the agent must be adjusted to the context.

In general, avatars are graphical representations of the users of a system and can be seen as interface agents. Avatars living in Second Life are interface agents for human players, but increasingly also for autonomous agents.

Consider the example above, where new abilities like dancing are automatically added to the avatar. It is possible to envisage a scenario where avatars are partially programmed to take autonomous decisions when the player is off-line.

Among these decisions is whether to comply with the norms of the community the avatar is acting in. Note that these mechanisms are useful not only when the avatar acts autonomously on behalf of its off-line owner, but also during the activity of the player.

In real life, norms are often violated simply out of distraction, ignorance or lack of resources, and the violator gains nothing by the deviant behavior. The same will eventually happen in virtual worlds, especially when the norms to be respected are not necessarily intuitive or similar to those of the real world.

In these cases, the decision to conform to norms can be left to the avatar, and the player can be relieved of this task.

The player could simply leave to the avatar the burden of conforming to the norms, by automatically disabling actions which are deviant with respect to the norms.
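As an illustrative sketch (all class and norm names are invented here, not taken from any Second Life API), automatically disabling deviant actions can be as simple as filtering the avatar's action repertoire through the community's prohibitions:

```python
# Sketch: an avatar that filters out actions forbidden by community norms.
# All names (Norm, Avatar, the example actions) are illustrative.

class Norm:
    """A prohibition: a predicate that returns True if an action is forbidden."""
    def __init__(self, name, forbids):
        self.name = name
        self.forbids = forbids  # callable: action -> bool

class Avatar:
    def __init__(self, actions, norms):
        self.actions = actions  # all actions the avatar can perform
        self.norms = norms      # norms of the community it acts in

    def permitted_actions(self):
        # Automatically disable actions that violate any active norm,
        # relieving the player of the burden of norm compliance.
        return [a for a in self.actions
                if not any(n.forbids(a) for n in self.norms)]

no_shouting = Norm("no-shouting", lambda a: a == "shout")
avatar = Avatar(["walk", "dance", "shout"], [no_shouting])
print(avatar.permitted_actions())  # ['walk', 'dance']
```

The interface offered to the player would then only show `permitted_actions()`, so norm compliance happens without any deliberation on the player's part.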

Challenge 9: Tools for conviviality.

Since scenarios like Second Life aim at people having pleasant social interactions, and norms may interfere with the goals of the players, the impact of norms on this dimension must be considered.

Norms should not constrain the freedom of participants too much, while still helping to avoid unpleasant behavior from other agents; but there is also a more subtle effect to be considered. Social interaction is regulated by social conventions, which can be modeled as a sort of institution.

In particular, in social relations the player acquires new social powers which he does not have in his first life. The tools for conviviality should study social dependencies among players and indicate how these dependencies can be made less unbalanced by attributing more social powers to some players. Note that, as in the example about the automatic learning of dancing abilities in Sect.

Challenge 10: Tools for legal responsibility of the agents and their principals.

Nowadays, agents are becoming subjects of human legislation. For example, it is debated whether agents have responsibilities beyond those attributed to their owner, or whether agents can really be attributed mental states which are to be taken into account in the attribution of responsibilities.

However, in scenarios like Second Life, new questions arise. Participants accept the rules of the game, and they should be made aware whether following the rules of some communities leads to infringement of real legislation.

The papers start to address the 10 challenges of the interactionist view, covering different dimensions of the normative multiagent systems field: the first is a recent evolution of the traditional deontic logic approach, the second concerns the interaction between mental attitudes and norms, the third addresses the ontological point of view on collectives of agents regulated by norms, and the fourth provides an automatic translation from norm specifications to a rule-based implementation.

In this formal framework, a new conflict resolution mechanism satisfying the postulate is defined. The relevance of this result extends to reasoning with conditionals in other domains such as reasoning about prioritized default rules.

In this view the behavior of agents is the outcome of a rational balance among their possibly conflicting mental states, as well as external normative factors such as obligations. Thirdly, the formal mechanism introduces the concept of rule conversion, which allows some motivations to be derived using rules devised for inferring different kinds of motivations.

Fourthly, the notion of agent type is analyzed from the complexity point of view. Owing to the side-effect problem, conflict resolution turns out to be very expensive in the case of social agents. Finally, the paper is important also for its methodological choice: the formalism used to analyze the problem is Defeasible Logic, a non-monotonic logic (since it has to deal with conflicts) which is computationally feasible (linear complexity), so that it provides a realistic tool to implement agents and not only to study their properties.
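A toy sketch of the kind of defeasible reasoning described, with two conflicting rules and a superiority relation, may help; the rule names and the bird example are illustrative and not the paper's formalism:

```python
# Minimal sketch of defeasible reasoning: a defeasible rule's conclusion stands
# unless a superior conflicting rule is also applicable. Illustrative only.

# Each rule: id -> (premises, conclusion). "-" marks a negated literal.
rules = {
    "r1": (["bird"], "flies"),
    "r2": (["penguin"], "-flies"),
}
superiority = {("r2", "r1")}  # r2 is stronger than r1

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def conclude(facts):
    derived = set(facts)
    # Rules whose premises all hold.
    applicable = {rid: concl for rid, (prem, concl) in rules.items()
                  if all(p in derived for p in prem)}
    conclusions = set()
    for rid, concl in applicable.items():
        attackers = [r2 for r2, c2 in applicable.items() if c2 == negate(concl)]
        # The conclusion stands only if this rule beats every attacker.
        if all((rid, r2) in superiority for r2 in attackers):
            conclusions.add(concl)
    return conclusions

print(conclude({"bird", "penguin"}))  # {'-flies'}: r2 overrides r1
```

The evaluation is a fixed number of passes over the rules, which is in the spirit of the linear-complexity claim, though the real logic also handles strict rules and defeaters.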

This paper fills this gap by providing a first-order formalization as well as an OWL version. Formally, a reification mechanism allows descriptions and situations to be in the same domain of quantification. Gangemi criticizes the definition of normative multiagent system given in [3], see Sect.

Norms are a specification of a conceptualization whose objective is regulatory, and social agents use norms as constraints within their own plans. This view leads to the definition of intentional normative collectives as knowledge communities unified by a plan that, in turn, is entrenched with norms according to the possible interactions between norms and plans.

Here, knowledge communities are collections of agents unified by descriptions that are shared by the member agents. In this way, specifications of norms can be given without having to also learn the implementation language. The specification language provided extends previous proposals in several respects: not only dialogical actions are the object of norms; conditions and temporal situations related to norms are introduced; sanctions are defined, pointing out the authorized agents that can apply the punishments; and norms can activate other norms, conditioned on their activation, deactivation, fulfillment or violation.

All these features of the norm specification find support in the implementation provided in Jess: the translation from the norm specification to Jess rules is made via an automated translator.
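The translation idea can be sketched as follows. The real system compiles norm specifications into Jess rules; this toy version (all names invented) compiles a norm with a condition, a violation test, an authorized sanctioner, and a sanction into a plain rule function:

```python
# Illustrative sketch of translating a declarative norm specification into a
# production-style rule. The real target is Jess; every name here is invented.

def translate(norm):
    """Compile a norm spec into a rule: state -> (sanctioner, sanction) or None."""
    def rule(state):
        active = norm["condition"](state)       # is the norm in force?
        violated = active and norm["violation"](state)
        if violated:
            # Only the authorized agent may apply the punishment.
            return (norm["sanctioner"], norm["sanction"])
        return None
    return rule

speeding = {
    "condition": lambda s: s["zone"] == "school",
    "violation": lambda s: s["speed"] > 30,
    "sanctioner": "police-agent",
    "sanction": "fine-100",
}
rule = translate(speeding)
print(rule({"zone": "school", "speed": 45}))  # ('police-agent', 'fine-100')
print(rule({"zone": "school", "speed": 20}))  # None
```

A governance mechanism would run all compiled rules against the current state to detect violations and dispatch sanctions; norm activation chains would add further condition functions.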

The Jess system can be used by the agents, who must be aware of the active norms, as well as by the governance mechanism, which must become aware of fulfilled and violated norms in order to apply the corresponding sanctions.

References

1. Anderson, M. Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4).
2. Boella, G. Norm negotiation in online multi-player games. Knowledge and Information Systems.
3. Boella, G. Introduction to normative multiagent systems. Dagstuhl Seminar Proceedings.
4. Broersen, J. Goal generation in the BOID architecture. Cognitive Science Quarterly, 2(3-4).
5. Caire, P. Conviviality masks in multiagent systems.
6. Castelfranchi, C. Modeling social action for AI agents. Artificial Intelligence, (1-2).
7. Castelfranchi, C. Formalising the informal? Dynamic social order, bottom-up social control, and spontaneous normative relations. Journal of Applied Logic, 1(1-2), 47-.
8. Giddens, A. The constitution of society. University of California Press.
9. Goble, L. Deontic logic and artificial normative systems. Lecture Notes in Computer Science. Revised versions of papers presented at the eighth international workshop on deontic logic in computer science (DEON). Journal of Applied Logic.

The architecture appears to be similar to that of hierarchical organization.

However, in a holonic architecture, cross-tree interactions and overlapping (agents forming part of two different holons) are allowed. More recently, [30] demonstrated the superiority of the holonic multi-agent organization and showed how the autonomy of the agents increases in a holonic group. The abstraction of the internal working of holons provides an increased degree of freedom when selecting the behaviour. A major disadvantage is the lack of a model, or of knowledge, of the internal architecture of the holons.

This makes it difficult for other agents to predict the resulting actions of the holons.

Figure: An example of a superholon with nested holons, resembling the hierarchical MAS.

c) Coalitions

In a coalition architecture, a group of agents comes together for a short time to increase the utility or performance of the individual agents in the group.

The coalition ceases to exist when the performance goal is achieved (Figure 5). The agents forming the coalition may have either a flat or a hierarchical architecture. Even when using a flat architecture, it is possible to have a leading agent act as a representative of the coalition group.

The overlap of agents among coalition groups is allowed, as this increases the common knowledge within the coalition group and helps to build a belief model. However, overlap increases the complexity of computing the negotiation strategy.

Coalitions are difficult to maintain in a dynamic environment due to shifts in the performance of the group, and it may be necessary to regroup agents in order to maximize system performance. Theoretically, forming a single group consisting of all the agents in the environment would maximize the performance of the system, because each agent would have access to all of the information and resources necessary to calculate the conditions for optimal action.

In practice, it is impractical to form such a coalition due to constraints on communication and resources. The number of coalition groups created must be minimized in order to reduce the cost associated with creating and dissolving a coalition group. The group formation may be pre-defined, based on a threshold set for a performance measure, or alternatively could be evolved online. In [32], a coalition multi-agent architecture for urban traffic signal control was proposed.

Each intersection was modelled as an agent with the capability to decide the optimal green time required for that intersection. A distributed neuro-fuzzy inference engine was used to compute the level of cooperation required and the agents which must be grouped together. The coalition groups reorganize and regroup dynamically with respect to the changing traffic input pattern. The disadvantage is the increased computational complexity involved in creating ensembles or coalition groups.

The coalition MAS may have better short-term performance than the other agent architectures [33].

Srinivasan

d) Teams

Team MAS architecture [34] is similar to coalition architecture in design, except that the agents in a team work together to increase the overall performance of the group, rather than each working as an individual agent. The interactions of the agents within a team can be quite arbitrary, and the goals or roles assigned to each of the agents can vary with time based on improvements resulting from the team performance.

Reference [35] deals with a team-based multi-agent architecture in a partially observable environment: teams that cannot communicate with each other, proposed for Arthur's bar problem. Each team decides whether to attend a bar by means of predictions based on the previous behavioural pattern and the crowd level experienced, i.e. the reward or utility received for the specific period of time.

Based on the observations made in [35], it can be concluded that a large team size is not beneficial under all conditions. Consequently, a compromise must be made between the amount of information, the number of agents in the team, and the learning capabilities of the agents. Large teams offer better visibility of the environment and a larger amount of relevant information; however, learning, i.e. incorporating the experiences of individual agents into a single team framework, suffers.

A smaller team size offers faster learning but results in sub-optimal performance due to a limited view of the environment. Tradeoffs between learning and performance need to be made in selecting the optimal team size. This makes the computational cost much greater than that experienced in a coalition multi-agent system architecture.

Figure 6. Teams 1 and 3 can see each other, but not teams 2 and 4, and vice versa.

The internal behaviour of the agents and their roles are arbitrary and vary between teams, even in a homogeneous agent structure. Most of these architectures are inspired by behavioural patterns in governments, institutions and large industrial organizations. A detailed description of these architectures, their formation and characteristics may be found in [34]. Unnecessary or redundant inter-agent communication can increase the cost and cause instability.

Communication in a multi-agent system can be classified into two types, based on the architecture of the agent system and the type of information to be communicated between the agents. Based on the information communication between agents [36], MAS can be classified as local communication (message passing) or network communication (blackboard).

Mobile communication can be categorized under local communication. The term message passing is used to emphasize the direct communication between the agents (Figure 7). In this type of communication the information flow is bidirectional; it creates a distributed architecture and reduces the bottleneck caused by the failure of central agents. This type of communication has been used in [25] [37] [38].
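A minimal sketch of such direct, point-to-point message passing, with one mailbox per agent and no central broker (the `Agent` class and registry are illustrative, not from any framework):

```python
# Sketch of direct (point-to-point) message passing between agents, using an
# in-process mailbox per agent. All names are illustrative.

from collections import deque

class Agent:
    def __init__(self, name, registry):
        self.name = name
        self.inbox = deque()
        self.registry = registry  # name -> Agent, for direct addressing
        registry[name] = self

    def send(self, recipient, content):
        # Direct, bidirectional communication: no central blackboard involved.
        self.registry[recipient].inbox.append((self.name, content))

    def receive(self):
        return self.inbox.popleft() if self.inbox else None

registry = {}
a, b = Agent("a", registry), Agent("b", registry)
a.send("b", "hello")
b.send("a", "hi")
print(b.receive())  # ('a', 'hello')
print(a.receive())  # ('b', 'hi')
```

Because every agent holds its own mailbox, the failure of one agent does not block traffic between the others, which is the distributed-architecture property noted above.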

Agent-based blackboards, like federation systems, use grouping to manage the interactions between agents. There are significant differences between the federation agent architecture and the blackboard communication.

In blackboard communication, a group of agents share a data repository which is provided for efficient storage and retrieval of data actively shared between the agents. The repository can hold both the design data as well as the control knowledge that can be accessed by the agents. The type of data that can be accessed by an agent can be controlled through the use of a control shell.

This acts as a network interface that notifies the agent when relevant data is available in the repository. The control shell can be programmed to establish different types of coordination among the agents. Neither the agent groups nor the individual agents in the group need to be physically located near the blackboards. It is possible to establish communication between various groups by remote interface communication.
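A toy sketch of blackboard communication, with a control shell that notifies only the subscribed agents when relevant data is posted to the shared repository (all names illustrative):

```python
# Sketch of blackboard communication: agents share a data repository, and a
# simple "control shell" notifies subscribers when relevant data is posted.

class Blackboard:
    def __init__(self):
        self.data = {}          # shared repository of design/control data
        self.subscribers = {}   # topic -> callbacks (the control shell)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def post(self, topic, value):
        self.data[topic] = value
        # Control shell: notify only the agents interested in this topic.
        for cb in self.subscribers.get(topic, []):
            cb(topic, value)

notified = []
bb = Blackboard()
bb.subscribe("design", lambda t, v: notified.append((t, v)))
bb.post("design", "v1")
bb.post("control", "stored-silently")  # no subscriber for this topic
print(notified)  # [('design', 'v1')]
```

Different coordination regimes amount to programming the control shell differently, e.g. filtering which topics an agent may read or write.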

The major issue is the failure of the blackboard. However, it is possible to establish some redundancy and share resources between various blackboards.

Figure 8. Message passing communication between agents: (a), (b).

An Introduction to Multi-Agent Systems

A common framework for agent interaction is provided by agent communication languages (ACL). The elements of prime importance in the design of an ACL were highlighted in [40]. There are two popular approaches to the design of an agent communication language.

They are the procedural approach and the declarative approach. In the procedural approach, the communication between the agents is modelled as a sharing of procedural directives. The procedural directives shared could be a part of how a specific agent does a specific task, or the entire working of the agent itself.

Scripting languages are commonly used in the procedural approach. Its major disadvantage is the necessity of providing information about the recipient agent, which in most cases is not known or only partially known. If a wrong model assumption is made, the procedural approach may have a destructive effect on the performance of the agents. The second major concern is the merging of shared procedural scripts into a single large executable script relevant to the agent.

Owing to these disadvantages, the procedural approach is not the preferred method for designing an agent communication language. In the declarative approach, the agent communication language is designed based on the sharing of declarative statements that specify definitions, assumptions, assertions, axioms, etc.

For the proper design of an ACL using the declarative approach, the declarative statements must be sufficiently expressive to encompass a wide variety of information. This increases the scope of the agent system and avoids the need for specialized methods to pass certain functions. The declarative statements must also be short and precise, as an increase in length raises the cost of communication as well as the probability of information corruption.

The declarative statements also need to be simple enough to avoid the use of a high-level language; that is, such a language should not be required to interpret the message passed. To meet all of the above requirements of a declarative ACL, the ARPA knowledge sharing effort devised an agent communication language. The inner language is responsible for translating the communication information into a logical form that is understood by all agents.

These representations must be unambiguous and context-independent, and the receivers must derive from them the original logical form. For each linguistic representation, the ACL maintains a large vocabulary repository.

A good ACL keeps this repository open-ended, so that modifications and additions can be made to include increased functionality. The information that can be encoded using KIF includes simple data, constraints, negations, disjunctions, rules, and meta-level information that aids in the final decision process.

It is not possible to use KIF alone for information exchange, as much implicit information needs to be embedded so that the receiving agent can interpret a message with minimal knowledge of the sender's structure. This is difficult to achieve, as the packet size grows with the amount of embedded information.

To overcome this bottleneck, high-level languages that use the inner language as their backbone were introduced. These high-level languages make the information exchange independent of the content syntax and ontology. From the example provided, it can be seen that KQML consists of three layers (Figure 9): a communication layer, which carries the origin and destination agent information and a query label or identifier; a message layer, which specifies the function to be performed (in the example, the first agent asks for a geographic location and the second agent replies to the query); and a content layer, which provides the details necessary to perform the specific query.
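The three layers can be made concrete with a small sketch. The performative (`ask-one`) and field names (`:sender`, `:receiver`, `:language`, `:ontology`, `:content`) follow common KQML conventions, but the serializer and the example query are illustrative, not the chapter's actual example:

```python
# Sketch: serializing a KQML-style message. The performative and field names
# follow KQML conventions; the query content is invented for illustration.

def kqml(performative, **fields):
    """Render a KQML-style message as an s-expression string."""
    body = " ".join(f":{key} {value}" for key, value in fields.items())
    return f"({performative} {body})"

query = kqml(
    "ask-one",
    sender="agent-a", receiver="agent-b",   # communication layer
    language="KIF", ontology="geography",   # message layer
    content="(geoloc paris ?where)",        # content layer
)
print(query)
```

The receiving agent dispatches on the performative and hands the `:content` field to an interpreter for the declared `:language`, which is how interpretation stays local to each agent.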

A stream-oriented approach is yet to be developed. The content layer specifies the language to be employed by the agent. It should be noted that agents can use different languages to communicate with each other, and interpretation can be performed locally by higher-level languages.

The uncertainty associated with the effects of a specific action on the environment, and the dynamic variation of the environment as a result of the actions of other agents, make multi-agent decision making a difficult task.

Usually, decision making in MAS is treated as the problem of finding a joint action, or equilibrium point, which maximizes the reward received by every agent participating in the decision-making process. Decision making in MAS can typically be modelled using game-theoretic methods.

The strategic game is the simplest form of decision-making process: every agent chooses its action at the beginning of the game, and all agents execute their chosen actions simultaneously. The payoff function is assumed to be predefined and known in the case of a simple strategic game. It is also assumed that the actions of all agents are observable and are common knowledge available to all agents. A solution to a specific game is a prediction of the outcome of the game under the assumption that all participating agents are rational.

The prisoner's dilemma is the classic case for demonstrating the application of game theory to decision making involving multiple agents. The problem can be stated as follows: two suspects involved in the same crime are interrogated independently.

If both prisoners confess to the crime, each of them will spend three years in prison. If only one of the prisoners confesses, the confessor goes free while the other spends four years in prison. If neither confesses, each will spend a year in prison. This scenario can be represented as a strategic game. A similar ordering could be performed by agent 2.

A payoff matrix that represents the particular preferences of the agents needs to be created. The reward or payoff received by each agent for choosing a specific joint action can be represented in matrix format, called a payoff matrix.

The problem depicts a scenario where the agents can gain if they cooperate with each other, but there is also the possibility of going free if a confession is made. The particular problem can be represented as a payoff matrix, as shown in the figure. In this case, it can be seen that the action "Not confess" is strictly dominated. An action is strictly dominated if some other action of the agent always yields a better payoff, irrespective of the other agents' actions.
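As a check on the reasoning above, the stated sentences (in years of prison, lower being better) can be put into a small payoff table and tested for strict dominance; the table entries come from the problem statement, while the function names are illustrative:

```python
# Prisoner's dilemma payoffs as years in prison (lower is better), taken from
# the problem statement above. Sketch of a strict-dominance check.

YEARS = {  # (my action, other's action) -> my prison years
    ("confess", "confess"): 3,
    ("confess", "not"):     0,
    ("not",     "confess"): 4,
    ("not",     "not"):     1,
}

def strictly_dominates(a, b):
    """Action a strictly dominates b if a is better for every opponent action."""
    return all(YEARS[(a, o)] < YEARS[(b, o)] for o in ("confess", "not"))

print(strictly_dominates("confess", "not"))  # True: "not confess" is dominated
```

Whatever the other prisoner does, confessing saves a year, which is exactly why "Not confess" is strictly dominated here.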

Figure: Payoff matrix for the prisoner's dilemma problem.

However, there can be variations of the prisoner's dilemma problem, obtained by introducing an altruistic preference while still calculating the payoffs of the actions.

Under this circumstance, no action is strictly dominated by another. In the most idealistic conditions, where the components of the game are drawn randomly from a collection of populations of agents, a Nash equilibrium corresponds to a steady-state value.

In a strategic game a Nash equilibrium always exists, but it is not necessarily unique. Examine the payoff matrix in the figure: if it were modified to add value based on trust or reward, creating altruistic behaviour and a feeling of indignation, then the subtle balance that exists would shift, and the problem would have multiple Nash equilibrium points, as shown in the modified payoff matrix.

Figure: Modified payoff matrix for the prisoner's dilemma problem.

In this particular case, there are no dominated solutions, and multiple Nash equilibria exist.
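The equilibrium claim can be verified by brute force for the two-action game above. This sketch enumerates the pure-strategy profiles and keeps those where neither agent can gain by deviating alone (payoffs are negated prison years, so higher is better; names illustrative):

```python
# Sketch: enumerate pure-strategy Nash equilibria of the symmetric two-player
# prisoner's dilemma. Payoff = negative prison years, so higher is better.

ACTIONS = ("confess", "not")
YEARS = {("confess", "confess"): 3, ("confess", "not"): 0,
         ("not", "confess"): 4, ("not", "not"): 1}

def payoff(mine, other):
    return -YEARS[(mine, other)]  # fewer years in prison = higher payoff

def nash_equilibria():
    eq = []
    for a1 in ACTIONS:
        for a2 in ACTIONS:
            best1 = all(payoff(a1, a2) >= payoff(b, a2) for b in ACTIONS)
            best2 = all(payoff(a2, a1) >= payoff(b, a1) for b in ACTIONS)
            if best1 and best2:  # neither agent gains by deviating alone
                eq.append((a1, a2))
    return eq

print(nash_equilibria())  # [('confess', 'confess')]
```

With the original payoffs there is a single equilibrium, mutual confession; adding altruistic bonuses to the payoff function would make further profiles survive the same test.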

To obtain a solution for this type of problem, coordination between the agents is an essential requirement. In the iterated elimination method, strictly dominated actions are iteratively eliminated until no more actions are strictly dominated. The method assumes that all agents are rational and would never choose a strictly dominated action.

This method is weaker than the Nash equilibrium, as it finds the solution by means of an algorithm that stops when there are no strictly dominated actions available in the solution space. This limits the applicability of the method in multi-agent scenarios, where mostly weakly dominated actions are encountered.
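A minimal sketch of iterated elimination of strictly dominated actions for a symmetric two-player game, using the prisoner's dilemma payoffs from above (function names illustrative):

```python
# Sketch: iterated elimination of strictly dominated actions in a symmetric
# two-player game. Payoffs are negated prison years, so higher is better.

def eliminate(actions, payoff):
    acts = set(actions)
    changed = True
    while changed and len(acts) > 1:
        changed = False
        for b in list(acts):
            # b is strictly dominated if some a beats it for every opponent move.
            if any(all(payoff(a, o) > payoff(b, o) for o in acts)
                   for a in acts - {b}):
                acts.remove(b)  # a rational agent never plays b
                changed = True
                break
    return acts

payoff = lambda mine, other: -{("confess", "confess"): 3, ("confess", "not"): 0,
                               ("not", "confess"): 4, ("not", "not"): 1}[(mine, other)]
print(eliminate(("confess", "not"), payoff))  # {'confess'}
```

For the prisoner's dilemma the procedure terminates in one round; in the altruistic variant described above, nothing is strictly dominated, so the loop stops immediately and the full action set survives, which is exactly the limitation noted.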

Agents are seldom stand-alone systems; usually more than one agent works in parallel to achieve a common goal. When multiple agents are employed to achieve a goal, there is a need to coordinate or synchronize their actions to ensure the stability of the system.

Coordination between agents increases the chances of attaining an optimal global solution. In [49], the major reasons necessitating coordination between agents were highlighted. These are used to compute the equilibrium action point that can effectively enhance the utility of all the participating agents. Applying constraints on the joint actions requires extensive knowledge of the application domain.

This may not be readily available. Each agent must then select the proper action, based on the computed equilibrium. However, the payoff matrix necessary to compute the utility values of all action choices might be difficult to determine, and its dimension grows exponentially with the number of agents and the available action choices.

This may create a bottleneck when computing the optimal solution. The problem of this dimensional explosion can be addressed by dividing the game into a number of sub-games that can be solved more effectively. A simple mechanism to reduce the number of action choices is to apply constraints, or to assign roles to each agent.

Once a specific role is assigned, the number of permitted action choices is reduced and the computation becomes more feasible. This approach is of particular importance in a distributed coordination mechanism.


