New AI system can understand, see like humans
New York: A new computational model performs at human levels on a standard intelligence test, putting the artificial intelligence (AI) system on a par with human understanding capabilities.
Researchers from Northwestern University built the new computational model on CogSketch, an artificial intelligence platform that can solve visual problems and understand sketches, giving immediate and interactive feedback.
“The model performs in the 75th percentile for American adults, making it better than average,” said Ken Forbus of Northwestern University, adding: “The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition.”
Researchers noted that developing artificial intelligence systems with this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but could also help shrink the gap between computer and human cognition.
“Most artificial intelligence research today concerning vision focuses on recognition or labelling what is in a scene rather than reasoning about it,” Forbus noted.
The key to higher-order cognition is the ability to use and understand sophisticated relational representations.
“Relational representations connect entities and ideas such as ‘the clock is above the door’ or ‘pressure differences cause water to flow’. These types of comparisons are crucial for making and understanding analogies, which humans use to solve problems, weigh moral dilemmas, and describe the world around them,” noted the study, published in the journal Psychological Review.
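The idea of a relational representation can be sketched in a few lines of code. The example below is a hypothetical illustration, not CogSketch or the authors' actual system: it encodes facts like “the clock is above the door” as relation triples, then finds a cross-domain analogy by matching shared relational structure rather than shared entities.

```python
# Minimal sketch (hypothetical, not CogSketch) of relational representations:
# each fact is a (relation, role1, role2) triple.

def relations_match(fact_a, fact_b):
    """Two facts are analogous if they share the same relation,
    regardless of which entities fill the roles."""
    return fact_a[0] == fact_b[0]

# One scene and one sketch, each described by relational facts.
scene = [("above", "clock", "door"),
         ("causes", "pressure-difference", "water-flow")]
sketch = [("above", "sun", "horizon")]

# Collect cross-domain analogies: pairs of facts with matching structure.
analogies = [(a, b) for a in scene for b in sketch if relations_match(a, b)]
print(analogies)  # → [(('above', 'clock', 'door'), ('above', 'sun', 'horizon'))]
```

“Clock above door” and “sun above horizon” match because the *above* relation is shared, even though no entity is; this entity-independent matching is what the study means by using relational structure to draw analogies.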