
Self-driving cars don't know that a snowman won't cross the road

Picture yourself driving down a city street. You go around a curve, and suddenly see something in the middle of the road ahead. What should you do?

Of course, the answer depends on what that ‘something’ is. A torn paper bag, a lost shoe, or a tumbleweed? You can drive right over it without a second thought, but you’ll definitely swerve around a pile of broken glass. You’ll probably stop for a dog standing in the road but drive straight through a flock of pigeons, knowing that the birds will fly out of the way. You might plough right through a pile of snow, but veer around a carefully constructed snowman. In short, you’ll quickly determine the actions that best fit the situation – what humans call having ‘common sense’.

Human drivers aren’t the only ones who need common sense; its lack in artificial intelligence (AI) systems will likely be the major obstacle to the wide deployment of fully autonomous cars. Even the best of today’s self-driving cars are challenged by the object-in-the-road problem. Perceiving ‘obstacles’ that no human would ever stop for, these vehicles are liable to slam on the brakes unexpectedly, catching other motorists off-guard. Rear-ending by human drivers is the most common accident involving self-driving cars.

The challenges for autonomous vehicles probably won’t be solved by giving cars more training data or explicit rules for what to do in unusual situations. To be trustworthy, these cars need common sense: broad knowledge about the world and an ability to adapt that knowledge in novel circumstances. While today’s AI systems have made impressive strides in domains ranging from image recognition to language processing, their lack of a robust foundation of common sense makes them susceptible to unpredictable and unhumanlike errors.

Common sense is multifaceted, but one essential aspect is the mostly tacit ‘core knowledge’ that humans share – knowledge we are born with or learn by living in the world. That includes vast knowledge about the properties of objects, animals, other people and society in general, and the ability to flexibly apply this knowledge in new situations. You can predict, for example, that while a pile of glass on the road won’t fly away as you approach, a flock of birds likely will. If you see a ball bounce in front of your car, you know that it might be followed by a child or a dog running to retrieve it. From this perspective, the term ‘common sense’ seems to capture exactly what current AI cannot do: use general knowledge about the world to act outside prior training or pre-programmed rules.

Today’s most successful AI systems use deep neural networks. These are algorithms trained to spot patterns, based on statistics gleaned from extensive collections of human-labelled examples. This process is very different from how humans learn. We seem to come into the world equipped with innate knowledge of certain basic concepts that help to bootstrap our way to understanding – including the notions of discrete objects and events, the three-dimensional nature of space, and the very idea of causality itself. Humans also seem to be born with nascent concepts of sociality: babies can recognise simple facial expressions, they have inklings about language and its role in communication, and rudimentary strategies to entice adults into communication. Such knowledge is so elemental and immediate that we aren’t even conscious we have it, or that it forms the basis for all future learning. A big lesson from decades of AI research is how hard it is to teach such concepts to machines.
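To make the contrast concrete, here is a minimal sketch of the pattern-spotting paradigm described above: a model adjusts numeric weights until its outputs match human-supplied labels. This is an illustrative toy (a single logistic unit trained by gradient descent, in pure Python), not any specific vision or driving system, and the data and names are invented for the example.

```python
# Toy supervised learner: tune weights of one logistic unit so that
# sigmoid(w.x + b) matches human-provided labels.
import math

def train(examples, labels, steps=5000, lr=0.5):
    """Stochastic gradient descent on cross-entropy loss."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(steps):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))     # model's predicted probability
            err = p - y                         # gradient of the loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

# Toy labelled data: the 'pattern' is that the label is 1 only when
# both features are present.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [False, False, False, True]
```

The learner succeeds on the statistics of its training set, but it has no concept of what the features mean; anything outside that distribution is exactly where, as the article argues, such systems fall down.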

Children learning innately
A human child learns many things innately, something that AI cannot.

On top of their innate knowledge, children also exhibit innate drives to actively explore the world, figure out the causes and effects of events, make predictions, and enlist adults to teach them what they want to know. The formation of concepts is tightly linked to children developing motor skills and awareness of their own bodies – for example, it appears that babies start to reason about why other people reach for objects at the same time that they can do such reaching for themselves. Today’s state-of-the-art machine-learning systems start out as blank slates and function as passive, bodiless learners of statistical patterns; by contrast, common sense in babies grows via innate knowledge combined with learning that’s embodied, social, active and geared towards creating and testing theories of the world.

The history of implanting common sense in AI systems has largely focused on cataloguing human knowledge: manually programming, crowdsourcing, or web-mining commonsense ‘assertions’ or computational representations of stereotyped situations. But all such attempts face a major, possibly fatal obstacle: much (perhaps most) of our core intuitive knowledge is unwritten, unspoken, and not even in our conscious awareness.

The US Defense Advanced Research Projects Agency (DARPA), a major funder of AI research, recently launched a four-year programme on ‘Foundations of Human Common Sense’ that takes a different approach. It challenges researchers to create an AI system that learns from ‘experience’ in order to attain the cognitive abilities of an 18-month-old baby. It might seem strange that matching a baby is considered a grand challenge for AI, but this reflects the gulf between AI’s success in specific, narrow domains and more general, robust intelligence.


Core knowledge in infants develops along a predictable timescale, according to developmental psychologists. For example, around the age of two to five months, babies exhibit knowledge of ‘object permanence’: if an object is blocked by another object, the first object still exists, even though the baby can’t see it. At this time babies also exhibit awareness that when objects collide, they don’t pass through one another, but their motion changes; they also know that ‘agents’ – entities with intentions, such as humans or animals – can change objects’ motion. Between nine and 15 months, infants come to have a basic ‘theory of mind’: they understand what another person can or cannot see and, by 18 months, can recognise when another person displays the need for help.

Since babies under 18 months can’t tell us what they’re thinking, some cognitive milestones have to be inferred indirectly. This usually involves experiments that test ‘violation of expectation’. Here, a baby watches one of two staged scenarios, only one of which conforms to commonsense expectations. The theory is that a baby will look for a longer time at the scenario that violates her expectations, and indeed, babies tested in this way look longer when the scenario does not make sense.

In DARPA’s Foundations of Human Common Sense challenge, each team of researchers is charged with developing a computer program – a simulated ‘commonsense agent’ – that learns from videos or virtual reality. DARPA’s plan is to evaluate these agents by performing experiments similar to those that have been carried out on infants and measuring the agents’ ‘violation of expectation signals’.
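One way such a ‘violation of expectation’ signal could be scored is as prediction error: the agent predicts the next state of a scene, and its surprise is the gap between prediction and observation, standing in for an infant’s longer looking time. The sketch below is purely illustrative; the naive straight-line physics model, the scenarios and the numbers are invented here, and DARPA’s actual evaluation protocol is not described in this article.

```python
# Hedged sketch: surprise as prediction error between an agent's
# expectation and what it actually observes.

def predict_next(position, velocity, dt=1.0):
    """Naive physics expectation: objects keep moving in a straight line."""
    return position + velocity * dt

def surprise(expected, observed):
    """Prediction error, the analogue of an infant's longer looking time."""
    return abs(expected - observed)

# Scenario A: a rolling ball keeps rolling (conforms to expectation).
# Scenario B: the ball seems to pass 'through' a wall and reappears far
# away (violates the expectation that solid objects don't interpenetrate).
expected = predict_next(position=2.0, velocity=1.0)   # agent expects 3.0
print(surprise(expected, observed=3.0))  # 0.0 -> no violation
print(surprise(expected, observed=9.0))  # 6.0 -> strong violation signal
```

An agent that produces large surprise only for the physically impossible scenario would, on this kind of test, look like it has acquired the corresponding piece of core knowledge – which is precisely why, as the article goes on to argue, passing the test is not the same as having it.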

This won’t be the first time that AI systems are evaluated on tests designed to gauge human intelligence. In 2015, one group showed that an AI system could match a four-year-old’s performance on an IQ test, resulting in the BBC reporting that ‘AI had IQ of four-year-old child’. More recently, researchers at Stanford University created a ‘reading’ test that became the basis for the New York Post reporting that ‘AI systems are beating humans in reading comprehension’. These claims are misleading, however. Unlike humans who do well on the same test, each of these AI systems was specifically trained in a narrow domain and didn’t possess any of the general abilities the test was designed to measure. As the computer scientist Ernest Davis at New York University warned: ‘The public can easily jump to the conclusion that, since an AI program can pass a test, it has the intelligence of a human that passes the same test.’

I think it’s possible – even likely – that something similar will happen with DARPA’s initiative. It could produce an AI program specifically trained to pass DARPA’s tests for cognitive milestones, yet possess none of the general intelligence that gives rise to these milestones in humans. I suspect there’s no shortcut to actual common sense, whether one uses an encyclopaedia, training videos or virtual environments. To develop an understanding of the world, an agent needs the right kind of innate knowledge, the right kind of learning architecture, and the opportunity to actively grow up in the world. It should experience not just physical reality, but also all of the social and emotional aspects of human intelligence that can’t really be separated from our ‘cognitive’ capabilities.

While we’ve made remarkable progress, the machine intelligence of our current age remains narrow and unreliable. To create more general and trustworthy AI, we might need to take a radical step backward: to design our machines to learn more like babies, instead of training them specifically for success against particular benchmarks. After all, parents don’t directly train their kids to exhibit ‘violation of expectation’ signals; how infants behave in psychology experiments is simply a side effect of their general intelligence. If we can figure out how to get our machines to learn like children, perhaps after some years of curiosity-driven, physical and social learning, these young ‘commonsense agents’ will finally become teenagers – ones who are sufficiently sensible to be entrusted with the car keys.

Published in association with the Santa Fe Institute, an Aeon Strategic Partner.
Melanie Mitchell
This article was originally published at Aeon and has been republished under Creative Commons.

If you found this post interesting, why not read our other popular post, 5 things driverless cars will do to change our future?

