To those fearmongering technophiles who consider AI an existential threat to humanity, here’s something that will most likely hurl you deeper into the pit of exasperation.

In a first, computer engineers at the University of Texas at Austin have trained an artificial intelligence (AI) agent to perceive its environment much as humans do: by taking a few quick glimpses of its surroundings.

AI sees like a human

Unlike most AI agents, which are trained for a specific task, such as identifying an object or estimating its volume, in an environment they already know well, like a factory, the newly developed agent can be assigned to basically any purpose. And with its enhanced ability to gather visual information, it could serve a wide variety of applications, including the development of effective search-and-rescue robots.

“We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise,” says Professor Kristen Grauman, the lead author of the study. “It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.”

To develop this AI system, the team turned to deep learning, an advanced type of machine learning inspired by the workings of the brain’s neural networks. To train it, they fed in thousands of 360-degree images of assorted environments. The next step was to present the agent with a scene it had no prior experience with; the agent then used its learned skill to choose a few glimpses amounting to less than 20 percent of the full scene.
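For a concrete sense of that 20-percent figure, a 360-degree panorama can be discretized into a grid of viewing directions, of which the agent is allowed only a handful. The grid size and glimpse budget below are illustrative assumptions, not numbers from the study:

```python
# Hypothetical discretization of a 360-degree scene into viewing directions.
# Grid size and glimpse budget are illustrative; the study reports only that
# the chosen glimpses amount to less than 20 percent of the full scene.
ELEVATIONS = 4        # vertical viewing angles
AZIMUTHS = 8          # horizontal viewing angles
TOTAL_VIEWS = ELEVATIONS * AZIMUTHS   # 32 possible camera directions

GLIMPSE_BUDGET = 6    # views the agent may actually observe

coverage = GLIMPSE_BUDGET / TOTAL_VIEWS
print(f"{GLIMPSE_BUDGET} of {TOTAL_VIEWS} views = {coverage:.1%} of the scene")
assert coverage < 0.20  # stays under the 20 percent described above
```

Everything the agent “knows” about the remaining 80-plus percent of the scene has to be inferred, which is what makes the choice of glimpses matter.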

What’s so striking about this arrangement is that after each glimpse, the agent chooses the next shot it estimates will add the most new information about the whole scene. On the basis of those glimpses, it then infers what it would have seen had it looked in other directions, and reconstructs a full 360-degree image of its surroundings.
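That glimpse-then-complete loop can be sketched as a toy: here a running uncertainty score stands in for the agent’s learned estimate of information gain, and averaging with neighbouring views stands in for its learned scene-completion network. None of this is the authors’ actual architecture; it only illustrates the control flow:

```python
import random

random.seed(0)

N_VIEWS = 8                                        # toy panorama: 8 directions
scene = [random.random() for _ in range(N_VIEWS)]  # "true" appearance per view

estimate = [0.5] * N_VIEWS      # prior guess for every direction
uncertainty = [1.0] * N_VIEWS   # stand-in for predicted information gain
seen = [False] * N_VIEWS

for _ in range(3):              # glimpse budget: 3 of 8 views
    # glance where the expected information gain is highest
    view = max(range(N_VIEWS), key=lambda v: uncertainty[v])
    seen[view] = True
    estimate[view] = scene[view]      # that direction is now observed
    uncertainty[view] = 0.0
    # stand-in for the completion step: unseen neighbours of an observed
    # view become easier to guess, so their uncertainty drops
    for nbr in ((view - 1) % N_VIEWS, (view + 1) % N_VIEWS):
        if not seen[nbr]:
            estimate[nbr] = (estimate[nbr] + scene[view]) / 2
            uncertainty[nbr] *= 0.5

print("observed", sum(seen), "of", N_VIEWS, "views")
```

Because each observation lowers the uncertainty of nearby views, the toy agent naturally spreads its three glances around the panorama rather than clustering them, which is the qualitative behaviour the article describes.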

“Just as you bring in prior information about the regularities that exist in previously experienced environments — like all the grocery stores you have ever been to — this agent searches in a nonexhaustive way,” Grauman said. “It learns to make intelligent guesses about where to gather visual information to succeed in perception tasks.”

For now, the agent is fixed at one spot and hence cannot move. However, it can point a camera in any direction, or gaze upon an object in its grip and decide how to turn it to inspect the other side. Next, the team will venture on to develop a more advanced agent able to perform complex manoeuvres and work under tight time constraints, which would be useful in search-and-rescue operations.

The AI agent was trained on supercomputers, and the training took about a day using an artificial intelligence approach called reinforcement learning. To expedite the process, the team is developing a subordinate agent, called a sidekick, to assist the main agent.

“Using extra information that’s present purely during training helps the [primary] agent learn faster,” said Santhosh Ramakrishnan, a Ph.D. candidate who was involved in the study.
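One way to read Ramakrishnan’s remark: during training, the full 360-degree scene is on hand as ground truth, so the learning signal can be computed from information a deployed agent never sees. The reward function below is a hypothetical stand-in in that spirit, not the paper’s actual formulation:

```python
def training_reward(estimate_before, estimate_after, true_panorama):
    """Reward the latest glimpse by how much it improved the scene
    reconstruction. The true panorama is available only at training
    time; a deployed agent could never compute this. (Illustrative
    stand-in, not the paper's exact reward.)"""
    def mse(est):
        return sum((e - t) ** 2
                   for e, t in zip(est, true_panorama)) / len(true_panorama)
    return mse(estimate_before) - mse(estimate_after)  # positive if it helped

true_pano = [0.9, 0.1, 0.8, 0.2]   # ground truth, known only in training
before = [0.5, 0.5, 0.5, 0.5]      # uniform prior guess
after = [0.9, 0.5, 0.5, 0.5]       # one glimpse reveals view 0 exactly
print(training_reward(before, after, true_pano))
```

A glimpse that sharpens the reconstruction yields a positive reward, so reinforcement learning can steer the agent toward informative viewpoints without that privileged signal ever being needed after deployment.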

The study, titled “Emergence of exploratory look-around behaviors through active observation completion,” has been published in the journal Science Robotics.

Source: The University of Texas at Austin