
Helen Toner, the effective altruist who sparked the OpenAI coup



Helen Toner, the Melburnian researcher seemingly at the heart of an ideological debate that saw generative AI poster child Sam Altman ousted from OpenAI, will not be remaining on the company’s not-for-profit board in the wake of Altman’s return to the fold.

The story of Altman’s removal and comeback has been a rollercoaster ride since the board’s decision earlier this week, and the episode is the latest flashpoint of an ongoing philosophical conflict under way in Silicon Valley.

While Altman, the face of OpenAI and ‘generative artificial intelligence’ in general, represents the commercial aspirations that underpin the future of the technology, Toner represents the effective altruism movement, which wants to maximise good and limit harm from AI.

Helen Toner in May cautioned against over-relying on AI chatbots, saying there was “still a lot we don’t know” about them.

The tug of war between these two ideals, which originally resulted in Altman being shown the door, now appears to have pulled him back in and put Toner on the outside within a matter of days. And OpenAI — which is unusually structured as a charity but with a for-profit arm that has seen products like ChatGPT become wildly successful in a short amount of time — has become a case study in the clash of visions for artificial intelligence going forward.

Toner has been associated with effective altruism since her time at Melbourne University. After graduating with a bachelor of science in 2014, she went on to work at Melbourne AI companies Draftable and Vesparum, as well as effective altruism charity assessment firm GiveWell.

She joined the Open Philanthropy Project as an analyst in 2015, had a stint doing research on AI governance at the University of Oxford, and in 2019 joined Georgetown University’s Center for Security and Emerging Technology (CSET), which advises on the potentially dangerous implications of AI and other emerging technologies, as director of strategy. CSET is funded in part by effective altruist-linked organisations including Open Philanthropy and the William and Flora Hewlett Foundation, as well as Elon Musk’s Musk Foundation.

In 2021 she graduated from Georgetown with a master’s degree in security studies and joined OpenAI’s non-profit board. In 2022 she also became director of foundational research grants at CSET.

Effective altruism, which was popularised by philosophers including Peter Singer but was most recently in the news because of (disputed) associations with FTX founder Sam Bankman-Fried, encourages people to take careers where they can maximise global positive impact. Some effective altruists are dedicated to funding the most impactful charities, while others focus on animal welfare. Many, like Toner, work to manage long-term existential risks like AI.

“Looking back over history, a small number of major transition points had radically larger effects on how people live and how civilisation functions than many of the smaller changes put together,” she said during a talk on AI risk at the Centre for Effective Altruism in London in 2017.


