A.I. researchers are closing in on a long-term goal: giving their programs the kind of knowledge we take for granted.

Imagine you’re watching the news and a headline flashes onscreen: “CHEESEBURGER STABBING.” What are you most likely to assume: that someone stabbed a cheeseburger, that a cheeseburger was used to stab someone, that a cheeseburger stabbed another cheeseburger . . . or that someone stabbed someone else in an argument over a cheeseburger? As Matthew Hutson writes, in a fascinating piece on the quest for common sense in artificial intelligence, a headline like “CHEESEBURGER STABBING,” among many other scenarios, leaves computers stumped. “They lack the common sense to dismiss the possibility of food-on-food crime,” he writes.

For humans, “life is cornery,” full of twists and turns, but computers lack our “vast reservoir of implicit knowledge.” There are tasks that A.I. can perform better than people—like playing chess or detecting tumors—but, on a basic level, computers still need to develop “baby sense,” the skills that small children use to navigate and understand the world.

Follow along as Hutson talks with leading computer scientists, and find out whether a computer can play the party game Codenames, what an A.I. called PIGLeT can tell you about eggs, and how close we really are to having software that can outsmart—and out-sense—humans.

—Jessie Li, newsletter editor