Technologists warn about the dangers of the so-called singularity. But can anything actually be done to prevent it?

Illustration by Shira Inbar

The artificial-intelligence doomers may be onto something. In a rigorous and unsettling new essay, Matthew Hutson weighs the arguments put forth by researchers who contend that computing systems may become so advanced that they escape our control. “In the worst-case scenario envisioned by these thinkers,” Hutson writes, “uncontrollable A.I.s could infiltrate every aspect of our technological lives, disrupting or redirecting our infrastructure, financial systems, communications, and more.” As one computer scientist puts it, “It’s almost like you’re deliberately inviting aliens from outer space to land on your planet, having no idea what they’re going to do when they get here, except that they’re going to take over the world.”

Such scenarios have terrifying implications for the human race, but Hutson suggests that we might consider it all from a different angle. “From a sufficiently cosmic perspective,” he writes, “one might feel that coexistence—or even extinction—is somehow O.K. Superintelligent A.I. might just be the next logical step in our evolution.”