Understanding natural — not just artificial — intelligence: stepping back from the rush to deploy A.I. From the editorial:

The challenge of creating humanlike intelligence in machines remains greatly underestimated. Today’s A.I. [artificial intelligence] systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning….

The lack of humanlike understanding in machines is underscored by recent cracks that have appeared in the foundations of modern A.I. While today’s programs are much more impressive than the systems we had 20 or 30 years ago, a series of research studies have shown that deep-learning systems can be unreliable in decidedly unhumanlike ways….

These potential vulnerabilities illustrate the ways in which current progress in A.I. is stymied by the barrier of meaning. Anyone who works with A.I. systems knows that behind the facade of humanlike visual abilities, linguistic fluency and game-playing prowess, these programs do not — in any humanlike way — understand the inputs they process or the outputs they produce. The lack of such understanding renders these programs susceptible to unexpected errors and undetectable attacks….

A.I. programs that lack common sense and other key aspects of human understanding are increasingly being deployed for real-world applications. While some people are worried about “superintelligent” A.I., the most dangerous aspect of A.I. systems is that we will trust them too much and give them too much autonomy while not being fully aware of their limitations….

The race to commercialize A.I. has put enormous pressure on researchers to produce systems that work “well enough” on narrow tasks. But ultimately, the goal of developing trustworthy A.I. will require a deeper investigation into our own remarkable abilities and new insights into the cognitive mechanisms we ourselves use to reliably and robustly understand the world. Unlocking A.I.’s barrier of meaning is likely to require a step backward for the field, away from ever bigger networks and data collections, and back to the field’s roots as an interdisciplinary science studying the most challenging of scientific problems: the nature of intelligence.

For many other posts on AI, see here.

h/t Medium