From: Maarten Sierhuis
Subject: Re: [Swarm-Modelling] Something Glen said
Date: Thu, 23 Nov 2006 00:28:35 -0800
Brahms has activities to create agents, objects, and areas dynamically (i.e., "on the fly") that indeed run as efficiently as any other instance. So one agent can create hundreds of other agents, each a member of different groups with different behaviors, based on some belief it gets. Since an agent can be a member of many groups (using multiple inheritance), when created it will inherit all behaviors (initial beliefs, situation-action rules, production rules, and activities) from its parent groups. Similarly, objects can inherit from classes. Areas don't have behavior, but can have inhabitants (both objects and agents) that can all be set dynamically. Therefore, during runtime you can create things dynamically, and agents can always communicate beliefs to and from each other (send or receive).

Now, "probing" an agent (meaning investigating its internal state) by another agent is not possible with "pure" Brahms agents. However, since Brahms is completely written in Java, there is a Java API with which you can write so-called Java Agents (you can also write an activity of an agent in Java, a so-called Java Activity). Using the JAPI (Java API) you can "probe" any agent. Thus, if deemed necessary, you can do all kinds of "probing" using Java.

One note about efficiency: since Brahms agents (and Brahms objects) all have an individual work-selection and inference engine, Brahms agents are not as efficient/fast as Java agents. However, the speed of execution really depends on the number of rules that can match on an agent's belief-set. The engine is pretty fast, because efficient rule-precondition matching has been studied for a long time (e.g., the RETE algorithm). We had to enhance this algorithm for a multi-agent language, and it is pretty fast because it is an event-based simulation engine, not a clock-based engine.
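To make the efficiency point concrete, here is a minimal sketch in plain Java of why belief-set size drives matching cost. All names (`Agent`, `Belief`, `Rule`, `firedRules`) are illustrative and are not the Brahms API; a naive matcher scans every belief for every rule, which is exactly the cost a RETE-style engine avoids by re-matching only changed beliefs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch (not the Brahms API): an agent holds a belief set,
// and each rule's precondition is matched against that set. Matching cost
// grows with the number of beliefs, which is why belief-set size matters.
public class BeliefMatchingSketch {
    record Belief(String attribute, Object value) {}

    record Rule(String name, Predicate<Belief> precondition) {}

    static class Agent {
        final List<Belief> beliefs = new ArrayList<>();
        final List<Rule> rules = new ArrayList<>();

        // Naive matcher: scans the whole belief set for every rule.
        // A RETE-style engine instead indexes beliefs so that only
        // newly asserted or changed beliefs re-trigger matching.
        List<String> firedRules() {
            List<String> fired = new ArrayList<>();
            for (Rule r : rules) {
                for (Belief b : beliefs) {
                    if (r.precondition.test(b)) {
                        fired.add(r.name);
                        break;
                    }
                }
            }
            return fired;
        }
    }

    public static void main(String[] args) {
        Agent a = new Agent();
        a.beliefs.add(new Belief("location", "hallway"));
        a.beliefs.add(new Belief("battery", 20));
        a.rules.add(new Rule("recharge",
            b -> b.attribute().equals("battery") && (int) b.value() < 30));
        System.out.println(a.firedRules()); // prints [recharge]
    }
}
```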
One other way to handle this belief-set "growth" issue is to "purge" beliefs from an agent's belief-set to make the precondition matching faster (we use this with speech acts that agents send to each other all the time). For the purists, this is something like giving an agent Alzheimer's, because when you delete a belief the agent can't know it ever had it, which is different from forgetting. We're working on better memory management, where agents have a long-term and a short-term "memory," and matching of beliefs gets managed by the memory manager, but that is future implementation work.

Doei ... MXS

On Nov 22, 2006, at 8:42 PM, Marcus G. Daniels wrote:
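The purge idea above can be sketched as follows. This is a hypothetical illustration in plain Java, not the Brahms implementation: beliefs (e.g., speech acts) carry a timestamp, and anything older than a horizon is deleted outright, so the matchable set stays small and, as noted, the agent afterwards has no record that the belief ever existed.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch (names like TimedBelief and purgeOlderThan are
// illustrative, not the Brahms API): old speech-act beliefs are deleted
// so precondition matching runs over a small belief set.
public class BeliefPurgeSketch {
    record TimedBelief(long eventTime, String content) {}

    static class Agent {
        final Deque<TimedBelief> beliefs = new ArrayDeque<>();
        long now = 0; // simulated event clock

        void receive(String content) {
            beliefs.addLast(new TimedBelief(now, content));
        }

        // Deleting (not merely ignoring) old beliefs: after the purge the
        // agent cannot know it ever held them.
        void purgeOlderThan(long horizon) {
            while (!beliefs.isEmpty()
                    && beliefs.peekFirst().eventTime() < now - horizon) {
                beliefs.removeFirst();
            }
        }
    }

    public static void main(String[] args) {
        Agent a = new Agent();
        a.receive("speech-act: request-status"); // at time 0
        a.now = 100;
        a.receive("speech-act: report-status");  // at time 100
        a.purgeOlderThan(50);                    // drops the time-0 belief
        System.out.println(a.beliefs.size());    // prints 1
    }
}
```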