When creating a cognitive computational agent, you want the agent to have the power of explanation. That is, you don't just want a black box that takes in inputs and gives an output without explaining why; such an agent wouldn't be very useful to you. Add the power of explanation, and the whole process your agent followed to create the output becomes visible to you. You can pinpoint the exact route your agent took to convert the inputs into the outputs. So the next time you use the agent to predict your friend's performance in that cognitive task, you won't just be able to tell him what his result will be, you'll be able to tell him exactly why his result is the way it is: all the factors he considered, the options he weighed, and how exactly he reached the end state of the cognitive task. Isn't that cool?
Continuing with the example of our cognitive task, predicting the winner of a football match: in a previous post, I described the spreading activation network that we used for the computational agent. Adding the power of explanation to this agent was tightly coupled with the task of visualization. We first had to visualize the network of teams and their associated attributes as a graph, with teams and attributes represented as nodes and the associations between teams and their respective attributes represented as edges. Then, to add the power of explanation, we highlight the spreading of activation through the network. For instance, if a game between team A and team B is being predicted, only team A and team B light up at first. Next, all of the attributes linked to the two teams light up. Finally, activation and inhibition are seen pushing one team toward the win node and the other toward the lose node. The whole process of making a prediction is thus visualized, and the spreading activation network explains the path it followed to make the prediction.
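To make the idea a little more concrete, here is a minimal Python sketch of what a spreading activation network with an explanation trace could look like. Everything in it, the class name SpreadingActivationNet, the attribute names, and the weights, is invented for illustration only; the actual agent described in the post is built from real team data and a richer activation and inhibition scheme.

```python
"""A minimal sketch of spreading activation with an explanation trace.
All names and weights are hypothetical, not the agent's real data."""

from collections import defaultdict


class SpreadingActivationNet:
    def __init__(self):
        # adjacency: team -> list of (attribute, weight)
        # positive weights excite (push the team toward 'win'),
        # negative weights inhibit (push the team toward 'lose')
        self.links = defaultdict(list)

    def associate(self, team, attribute, weight):
        self.links[team].append((attribute, weight))

    def predict(self, team_a, team_b):
        """Spread activation from the two team nodes and return the
        predicted winner plus a step-by-step explanation trace."""
        trace = [f"Activated team nodes: {team_a}, {team_b}"]
        scores = {}
        for team in (team_a, team_b):
            total = 0.0
            for attribute, weight in self.links[team]:
                total += weight
                direction = "win" if weight > 0 else "lose"
                trace.append(
                    f"  {team} -> {attribute} "
                    f"(weight {weight:+.2f}, pushes toward '{direction}')"
                )
            scores[team] = total
            trace.append(f"  Net activation for {team}: {total:+.2f}")
        winner = max(scores, key=scores.get)
        loser = team_b if winner == team_a else team_a
        trace.append(f"'{winner}' reaches the win node; '{loser}' the lose node.")
        return winner, trace


if __name__ == "__main__":
    net = SpreadingActivationNet()
    # Hypothetical associations between teams and their attributes.
    net.associate("Team A", "strong defence", +0.8)
    net.associate("Team A", "key player injured", -0.4)
    net.associate("Team B", "good recent form", +0.6)
    net.associate("Team B", "poor away record", -0.5)

    winner, explanation = net.predict("Team A", "Team B")
    print(f"Predicted winner: {winner}")
    for step in explanation:
        print(step)
```

The point of the sketch is the returned trace: instead of only an answer, the agent hands back the sequence of nodes and weights it touched, which is exactly the path you would highlight in the visualization.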