In July 1957, 22 prominent scientists gathered quietly at a private lodge in Pugwash, a small town in the Canadian province of Nova Scotia. They had answered a call to action issued by Bertrand Russell and Albert Einstein, inviting scientists to shape guardrails that would contain the dangers of nuclear weapons. The Pugwash Conference earned a Nobel Peace Prize and, more importantly, laid the foundations for the nuclear non-proliferation treaties that saved our world from the risk of annihilation.
Who can be trusted with shaping A.I. guardrails?
A careful analysis of how prior technologies and scientific innovations were tamed in the 20th century offers a clear answer to this question: guardrails were designed by the scientists who know their own creations and understand, better than most, how they might evolve.
At Pugwash, influential scientists came together to develop strategies to mitigate the risks of nuclear weapons, contributing significantly to arms control agreements and fostering international dialogue during the tense Cold War era. In February 1975, at the Asilomar Conference in California, it was again scientists who met and established critical guidelines for safe and ethical recombinant DNA research, thereby preventing potential biohazards. The Asilomar guidelines not only paved the way for responsible scientific inquiry but also informed regulatory policies worldwide. More recently, it was the scientists and inventors of the Internet, led by Vint Cerf, who convened and shaped the framework of guardrails and protocols that allowed the Internet to thrive globally. These successful precedents show that businesses and governments must first make space and let A.I. scientists shape a framework of guardrails that contains the risks without limiting the many benefits of A.I. Businesses can then implement such a framework voluntarily, and only when necessary should governments step in to enforce it by enacting policies and laws based on the scientists’ framework. This proven approach worked well for nuclear technology, DNA, and the Internet. It should be the blueprint for building safer A.I.

A “Pugwash Conference for A.I. scientists” is therefore urgently needed. The conference should include no more than two dozen scientists, in the mold of Geoffrey Hinton, who chose to quit Google in order to speak his mind on A.I.’s promise and perils.
As at Pugwash, the scientists should be drawn from all the key countries where advanced A.I. technologies are being developed, in order to at least strive for a global consensus. Most importantly, the choice of participants at this seminal A.I. conference must reassure the public that the conferees are shielded from special interests, geopolitical pressures, and profit-centric motives. While hundreds of government leaders and business bosses cozy up to discuss A.I. at multiple annual international events, thoughtful and independent A.I. scientists must urgently come together to make A.I. good for all.

Fadi Chehadé is chairman, co-founder, and managing partner of Ethos.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.