Bobrow Responses: Selected Excerpts


1. The author gave examples of many different types of communication. I found a few of these particularly interesting. The example of two people describing unfamiliar silhouettes illustrated the idea of progressively optimizing communication through increased common ground ...

The author also mentioned a few interesting A.I.-ish techniques that had little to do with communication. The observation that storing all of the data explicitly can be overkill was interesting. By storing only the information necessary to regenerate the data in a reasonable amount of time, a program can achieve a higher knowledge density. Such a concept could be a major building block of a common-sense learning algorithm.
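A minimal sketch of that storage-versus-regeneration trade-off (the example domain and all names here are my own illustration, not the author's):

```python
from functools import lru_cache

# Explicit storage: every fact is materialized up front.
explicit_table = {n: n * n for n in range(1_000_000)}

# Procedural storage: a compact rule regenerates any fact on demand;
# a small cache keeps repeated regeneration reasonably fast.
@lru_cache(maxsize=1024)
def regenerate(n: int) -> int:
    return n * n

# Same knowledge either way, but the rule costs one function instead of
# a million stored entries -- a much higher "knowledge density".
assert regenerate(42) == explicit_table[42]
```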


2. The various issues raised by Bobrow are important, again to greater or lesser extents depending on the actual agent. He raised these points to draw the attention of the AI community to an area that previously lacked extensive research. I would be interested in knowing how the AI community responded to his urgings and how the issues have withstood the test of time. I feel that Bobrow has overlooked the interplay among the various issues. Adding functionality, whether in the form of extensible common ground or resource management, makes the agent as a whole more complex. As this complexity grows, decisions will need to be made about whether new complexity is warranted and whether the trade-offs between new functionality and initial resource allocation are worth it ...


3. While reading Bobrow's paper, I was most impressed by the fact that the paper could have been a publication on distributed operating systems. The problems that seem to hinder the type of AI research that Bobrow pursues are almost exactly the same problems and trade-offs facing contemporary distributed operating systems. It also seemed ironic to me that the very solutions Bobrow's approach to AI systems requires depend on the technology and sophistication of current distributed operating systems, while advances in the technology and sophistication of distributed operating systems depend very much on the progress of AI research to make them more "intelligent." As far as this symbiotic relationship goes, Bobrow's pursuit seems to be flawed only in that it IS a partner in a symbiotic relationship with the field of distributed operating systems, and this kind of situation does not seem conducive to very rapid progress, since each field's progress will always be hindered by the other's.

[...]

The greatest flaw I have found with AI research is that the field is still so young that it remains more a quest for a deeper understanding of human intelligence, yet most AI techniques I have encountered assume that we already understand ourselves well enough to model ourselves in a less abstract or complex form. To simplify a model, one must fully understand the complete model in order to realize what assumptions can be made in which instances while maintaining validity of and consistency with the original model. To construct the complete model, I believe we must try to stay intellectually disconnected from the ideals of sentience and intelligence and focus instead on understanding how the brain functions architecturally, how the human body interacts with the physical world, and how the human brain interacts with the rest of the body. When we better understand perception, then we may focus on the transition of data from physical perception to animal intelligence.


4. Bobrow starts out with some intellectually stimulating ideas on the Dimensions of Interaction. However, by the end of the paper, I'm still not convinced that he can deliver what he proposes.

Bobrow classifies typical programming techniques as assuming omniscience and omnipotence. After reading this, I realized that this is exactly the way I program, and that taking programming to the next level would require some new thinking. Bobrow's ideas of agents and mediating agents seem to be a step in the right direction. However, he quickly stumbles over the same problems that arise in any programming problem: communication, understanding, and application to the real world.

"Extending vocabulary ... is the first step in extending common ground." This seems like a logical assumption. And in practice, this is ubiquitous in Computer Science. But how do we solve this? I don't get the sense that Bobrow has broken any ground here. This may be from my own lack of understanding of things such as neural networks, but Bobrow seems to be suggesting a "way" to solve a problem in an abstract sense, but with no good solutions. But, perhaps he is just trying to get us to think differently. The idea of the mediating agent is not new. Many things have an interface between two different agents in an attempt to interpret data to a common language. The trick is to develop such an interface agent. How do you do this and avoid the omniscience/omnipotence assumptions? I don't know. I guess this is the trick.


5. Bobrow proposes that the challenge for the next decade in the AI field is "to build systems that can interact productively with each other, with humans, and with the physical world" along "three dimensions of interaction: communication, coordination and integration." These three dimensions are similar to the dimensions along which groupware systems are typically evaluated, i.e., communication, collaboration, and coordination (Ellis et al., 1991). In addition, groupware systems often function as coordinators, communicators, agents, information keepers, or a combination of these four functions. It is this fourth functional area of groupware systems, that of information keepers, that seems to be missing among the dimensions of interaction posited for AI systems.

Interestingly, since the paper was written in 1990, we have seen much activity in one of the areas mentioned, that of mediators, or "active modules between the user's application and the data resources". This is quite apparent in Internet search technology. Also interesting is the discussion of the growth of "metaservices": more and more, we find services whose purpose is to guide us to other information services that may fulfill our needs.


6. Bobrow later talks about conversation-based coordination in a manner that reminds me of workflow management systems I have seen described elsewhere. However, the implication is that there is more to be gained from an intelligent coordinator than is typical in workflow systems. There is also an interesting implication that a coordinator responds to the system's conversations in much the same way that a human responds to the structure of speech. Workflow systems are basically similar, in that they respond to the contents of the conversation based on a set of rules and the semantics of the shared language, promoting each event based on the responses of participating agents.
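A toy sketch of what such conversation-based coordination might look like (the states and speech acts below are illustrative, loosely in the spirit of conversation-for-action loops, and are not taken from Bobrow's paper):

```python
# Toy coordinator for a conversation-based workflow: the coordinator
# advances each conversation according to rules over speech acts, much
# as a workflow engine promotes events based on agents' responses.

TRANSITIONS = {
    ("requested", "promise"): "promised",
    ("requested", "decline"): "closed",
    ("promised", "report-done"): "reported",
    ("reported", "accept"): "closed",
    ("reported", "reject"): "promised",  # work sent back for rework
}

def coordinate(state: str, speech_act: str) -> str:
    """Advance the conversation if the act is legal in this state."""
    try:
        return TRANSITIONS[(state, speech_act)]
    except KeyError:
        raise ValueError(f"{speech_act!r} is not a valid move in state {state!r}")

state = "requested"
for act in ["promise", "report-done", "accept"]:
    state = coordinate(state, act)
print(state)  # -> closed
```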


7. I don't completely agree with Bobrow's idea of letting agents adapt to new terminology. If specialized agents that deal only with very similar terminology are going to communicate, this idea seems fine. But if you want agents to be able to communicate with a wide variety of other agents, the concept is unrealistic. Each time an agent encounters a new agent with significantly different terminology, it will have to figure out the new agent's terminology. This will cause significant delays in communication, since the agent will spend a large amount of time learning, and it will consume a large amount of resources. Storing all of this new terminology will require a large, ever-expanding memory. It will also take a long time for the agent to identify which subset of its learned terminology is being used, or whether it is being used in a new way; this means searching and comparing all of its current terminology.

This concept is analogous to human sub-cultures. When an engineer talks about how a computer works, he typically uses technical jargon and engineering concepts that a common person does not understand. How does an engineer explain to a common person how a computer works? Usually by finding terminology that the person can understand. The common user might learn some new terminology, but it is entirely grounded in past terminology and ideas. This new terminology now relates only to how a computer works and is quickly forgotten, since the person won't use it very often. This may be how Bobrow sees his idea of learning agents, but he does not specify.
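To make the cost concern above concrete, here is a deliberately naive sketch (my own illustration): an agent that hoards every peer's vocabulary and scans all of it to interpret each term, which is exactly where the memory and search costs come from:

```python
from typing import Optional

# Naive "learning agent": memory grows with every peer it meets, and
# interpreting a term means scanning everything it has ever learned,
# with no way to tell if a known term is being used in a new way.

class NaiveLearningAgent:
    def __init__(self):
        self.lexicons = {}  # peer -> {term: meaning}

    def learn(self, peer: str, term: str, meaning: str) -> None:
        self.lexicons.setdefault(peer, {})[term] = meaning

    def interpret(self, term: str) -> Optional[str]:
        # Worst case: compare against every term learned from anyone.
        for lexicon in self.lexicons.values():
            if term in lexicon:
                return lexicon[term]
        return None

agent = NaiveLearningAgent()
agent.learn("engineer", "cache", "fast memory close to the CPU")
print(agent.interpret("cache"))  # -> 'fast memory close to the CPU'
```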


8. While I would take issue with particular lesson conclusions he draws, overall I think his points are good. An example of a lesson conclusion I disagree with is "system capabilities can be used for extending common ground by reifying the state and actions of the system itself." It is not sufficient to make something explicit; one must also draw attention to what is being made explicit (i.e., presentation is crucial). For example, I can place a footnote in this document to make a point explicit. However, by placing the explicit material in a footnote, I put it in a place of less importance (i.e., not the main body of the text), and the reader may never read it.


9. The paper brings up interesting concepts and highlights some nice examples. I am very interested in the interplay of multiple intelligent agents and how these ideas relate to the behavior of collections of agents. Often, I am struck by the inadequacies of a reductionist approach to understanding large systems. Much of my experience as a scientist has been as a molecular biologist, and I have seen many shortcomings of the often purely reductionist approach of that discipline. A metabolic pathway can be analogized to a single intelligent agent: it has its own goal, it may regulate itself, it has a set of interactions with other pathways, and so on. A molecular biologist strives to understand a single system in as much detail as possible, but understanding a single system in explicit detail usually does not reveal much about the behavior of the cell as a collection of metabolic entities. The behavior of the whole is not necessarily inferable from the behavior of its parts. It is the added dimension of interaction that gives complex systems their global behavior.
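A tiny simulation sketch (my own illustration, not from the paper) of that last point: each agent follows a single local rule, copying the majority of its immediate neighborhood, yet the population settles into contiguous same-state blocks, a global structure that no individual agent's rule mentions:

```python
import random

# Each "agent" holds a binary state and follows one local rule: adopt the
# majority state of itself and its two neighbors on a ring. Isolated
# dissenters get smoothed away, so the global pattern that emerges is
# long same-state blocks -- behavior visible only at the collective level.

random.seed(0)
states = [random.choice([0, 1]) for _ in range(40)]

def step(s):
    n = len(s)
    return [1 if s[(i - 1) % n] + s[i] + s[(i + 1) % n] >= 2 else 0
            for i in range(n)]

for _ in range(10):
    states = step(states)
print("".join(map(str, states)))  # long runs of 0s and 1s: emergent blocks
```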