In May, more than 350 technology leaders, researchers, and academics signed a statement warning of the existential dangers of artificial intelligence. “Reducing the risk of extinction from A.I. should be a global priority,” the signatories said, alongside preventing pandemics and nuclear war. The statement followed another high-profile letter, signed by the likes of Elon Musk and the Apple co-founder Steve Wozniak, calling for a six-month pause on the development of advanced A.I. systems.
The Biden administration has urged that A.I. be developed responsibly, arguing that “to take advantage of the opportunities it offers, we must first manage its risks.” In Congress, Senator Chuck Schumer called for “first-of-their-kind” listening sessions on the potential and the risks of A.I., a kind of crash course taught by business leaders, academics, civil rights activists, and other stakeholders.
The mounting anxiety about A.I. isn’t directed at the boring but reliable technologies that autocomplete our text messages or steer our robot vacuums around obstacles in our living rooms. What worries the experts is the rise of artificial general intelligence, or A.G.I.
A.G.I. doesn’t exist yet, but some believe that the rapidly improving capabilities of OpenAI’s ChatGPT suggest it is within reach. Sam Altman, a co-founder of OpenAI, has described it as “systems that are generally smarter than humans.” Building such systems remains a daunting, some say impossible, task. But the potential rewards are tantalizing.
Imagine Roombas that could do more than vacuum the floors: all-purpose robots that brew our morning coffee or fold the laundry, without ever being told to do these things.
It sounds appealing. But should these A.G.I. Roombas get too smart, their mission to create a spotless utopia might get messy for their dust-spreading human masters. At least we’ve had a good run.
Such doomsday scenarios loom large in discussions of A.G.I. Yet a growing chorus of academics, investors, and entrepreneurs counters that A.G.I., once made safe, would be a boon to civilization. Mr. Altman, the public face of this effort, embarked on a world tour to charm lawmakers. Earlier this year he wrote that A.G.I. might even turbocharge the economy, advance science, and “raise humanity by making more of everything available.”
That is why, despite all the agonizing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world would seem almost irresponsible.
They are beholden to an ideology that casts this new technology as inevitable and, in its safe form, as universally beneficial. Its proponents can conceive of no better way to improve humanity and brighten its future.
But this ideology, call it A.G.I.-ism, is mistaken. The real dangers of A.G.I. are political, and they won’t be fixed by taming rogue robots. Even the safest A.G.I. would not deliver the progressive panacea its champions promise. And by presenting its arrival as all but inevitable, A.G.I.-ism distracts us from finding better ways to augment intelligence.