A Privacy Hero’s Final Wish: An Institute to Redirect AI’s Future

Yesterday, hundreds of Eckersley’s friends and colleagues packed the benches for an unusual kind of memorial service in the church-like sanctuary of the Internet Archive in San Francisco — a symposium with a series of talks devoted not only to Eckersley as a person but to a tour of his life’s work. Facing a shrine to Eckersley at the back of the hall, filled with his writings, his beloved road bike, and samples of his Victorian Gothic wardrobe, Turan, Gallagher, and 10 other speakers gave presentations on Eckersley’s long list of contributions: his years pushing Silicon Valley toward better privacy practices, his co-founding of a pioneering project to encrypt the entire web, and his focus in recent years on improving the safety and ethics of artificial intelligence.

The event also served as a kind of soft launch for the AOI, the organization that will now carry on Eckersley’s work after his death. Eckersley envisioned the institute as an incubator and applied laboratory that would work with major AI labs to tackle a problem he believed was perhaps even more important than the privacy and cybersecurity work to which he had dedicated decades of his career: reorienting the future of artificial intelligence away from the forces that cause suffering in the world, and toward what he described as “human flourishing.”

“We need to make AI not just who we are, but what we aspire to be,” Turan said in his speech at the memorial service, after playing a recording of the phone call in which Eckersley recruited him. “So it could lift us in that direction.”

Eckersley’s envisioned mission for the AOI stemmed from a sense, growing over the past decade, that AI suffers from a “fitness problem”: its development is advancing at an ever-accelerating pace, but toward simplistic goals out of step with humanity’s health and happiness. Rather than usher in a paradise of abundance and creative leisure for all, Eckersley believed that on its current course, AI would likely amplify all the forces already damaging the world: environmental destruction, exploitation of the poor, and rampant nationalism, to name a few.

The goal of the AOI, as described by Turan and Gallagher, is not to try to curb the progress of AI but to steer its goals away from those destructive forces. This, they argue, is humanity’s best hope of preventing, for example, superintelligent software that can brainwash humans through advertising or propaganda, corporations with godlike strategies and powers bent on extracting the last hydrocarbon from the Earth, or automated hacking systems that can penetrate any network in the world and cause global chaos. “AI failures won’t look like nanobots suddenly crawling all over us,” Turan says. “These will be economic and environmental disasters that look very familiar, similar to things that are happening now.”

Gallagher, the current executive director of the AOI, asserts that Eckersley’s vision for the institute was not that of a doomsaying Cassandra, but of a shepherd who could guide AI toward his utopian dreams for the future. He was never thinking about how to prevent a dystopia. His eternally optimistic way of thinking was: “How do we create a utopia? What can we do to build a better world, and how can AI contribute to human flourishing?”
