In the last post, I talked about the power of explanation, which allows your cognitive computational agent to explain to you how it got from its inputs to its outputs. In this post, I talk about another power: the power of learning. The idea behind this power is for the agent to learn to do the cognitive task the way humans learn.
In our spreading activation network, this is done through language processing. You speak a sentence to the agent such as "Germany won the world cup". The agent automatically picks up Germany as a team and creates a Germany node in the network, picks up world cup as an attribute, creates a world cup node, and links it to the Germany node. Finally, it picks up 'won', which causes the link between Germany and world cup to push Germany toward the win side. This is much like how humans learn. Think about the last time you read a news article about a sports team you did not know much about, and how certain facts stuck in your brain in the form of concepts linked to that team.
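The flow above can be sketched in a few lines of Python. This is a minimal illustration, not our actual implementation: the class and function names (`SpreadingActivationNetwork`, `learn_from_sentence`) are hypothetical, and the "parser" is deliberately naive, assuming sentences of the form "&lt;team&gt; won/lost &lt;attribute&gt;".

```python
class SpreadingActivationNetwork:
    """Toy network: named nodes plus weighted links between them."""

    def __init__(self):
        self.nodes = set()
        self.links = {}  # (team, attribute) -> weight

    def add_link(self, team, attribute, weight):
        # Create nodes for both concepts, then link them.
        self.nodes.add(team)
        self.nodes.add(attribute)
        self.links[(team, attribute)] = weight


def learn_from_sentence(net, sentence):
    """Naive parse of '<team> won/lost <attribute>' (illustrative only)."""
    words = sentence.lower().split()
    team, verb, attribute = words[0], words[1], " ".join(words[2:])
    # 'won' pushes the team toward the win side; 'lost' toward the loss side.
    weight = 1.0 if verb == "won" else -1.0
    net.add_link(team, attribute, weight)


net = SpreadingActivationNetwork()
learn_from_sentence(net, "Germany won the world cup")
```

After the call, the network contains a germany node linked to a "the world cup" node with a positive weight, mirroring the node-and-link creation described above.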
Another very powerful way of learning is learning from mistakes. We make predictions and then compare them to actual results. If a result matches our prediction, the underlying model that produced the prediction is strengthened. If it does not, the model is modified or certain links are weakened. Our cognitive agent implements this ability of learning from mistakes: we feed it the actual results of the matches it predicts, and it learns from them, strengthening some links in the underlying spreading activation network and weakening others. In summary, two ways of learning have been implemented in our cognitive agent: 1) learning from language, and 2) learning from mistakes. While I discuss these two aspects from the standpoint of our project, they are general ways of learning that anyone doing a cognitive science project should consider implementing.
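The strengthen-or-weaken cycle can be sketched as follows. Again, this is an illustrative sketch rather than our project's code: the multiplicative update rule, the learning rate, and the names `predict` and `learn_from_result` are all assumptions made for the example.

```python
def predict(links, team):
    """Predict a win if the team's summed link weights are positive."""
    score = sum(w for (t, _), w in links.items() if t == team)
    return "win" if score > 0 else "loss"


def learn_from_result(links, team, actual, rate=0.1):
    """Compare the prediction to the actual result, then adjust the links.

    A correct prediction strengthens the links that produced it; an
    incorrect one weakens them (a simple illustrative update rule).
    """
    correct = predict(links, team) == actual
    for key in [k for k in links if k[0] == team]:
        if correct:
            links[key] *= 1 + rate  # reinforce the underlying model
        else:
            links[key] *= 1 - rate  # weaken the links that misled us


links = {("germany", "world cup"): 1.0}
learn_from_result(links, "germany", "win")   # correct: link strengthened
```

Feeding the agent a stream of actual match results repeats this cycle, gradually shifting weight toward the links that predict well.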