
Towards Data Science Podcast Finale: The future of AI, and the risks that come with it | by Jeremie Harris | Oct, 2022



APPLE | GOOGLE | SPOTIFY | OTHERS

Editor’s note: The TDS Podcast is hosted by Jeremie Harris, who is the co-founder of Gladstone, an AI safety startup. Every week, Jeremie chats with researchers and business leaders at the forefront of the field to unpack the most pressing questions around data science, machine learning, and AI.

Two years ago, I told the Towards Data Science community that we were going to take this podcast in a different direction. Until then, our guests and our conversations had been focused on industry applications of machine learning and data visualization, data science career development strategies, and other related topics. And that made sense: back then I was running a company called SharpestMinds, which ran a mentorship program for data scientists and machine learning engineers. I co-founded SharpestMinds with my brother Ed, who’s been on the podcast before. SharpestMinds was the third company we’d built together, but importantly it was the first one that actually worked. We lucked out, had a great team around us, and the company became the world’s largest data science and machine learning mentorship program based on income share agreements.

But in early 2020, everything changed. That’s when OpenAI announced to the world that they’d created GPT-3, and more importantly, it’s the moment we realized that the recipe that OpenAI used to build GPT-3 could very likely be extended much, much further — potentially all the way to building general-purpose AIs that might meet or exceed human intelligence across a wide range of tasks.

At the same time, Ed and I were aware of the concerns of the AI alignment community, which for years had been arguing that these sorts of broadly intelligent AI systems may eventually become uncontrollable, and pose catastrophic risks. As wild as those claims seemed at first, the more we looked into them over the years, the harder it became to deny that they were getting at something that was actually pretty legitimate. And as we started to talk to AI risk skeptics, we came away pretty disappointed and underwhelmed with their arguments.

And so, with these concerns about potential catastrophic risks from advanced AI in one hand, and a promising path towards building precisely those kinds of systems in the other, we felt that we couldn’t justify working on anything other than AI safety. So we made the difficult decision to leave SharpestMinds to our two earliest employees, and take the plunge into the world of AI safety, with no real idea about where to start.

Around that time, I reached out to the editorial team at Towards Data Science about our decision. They put forward an idea: maybe we could use the TDS podcast as a platform to explore AI safety and share some of those ideas with the wider world, as we continued our journey into AI alignment.

I loved the concept. And that’s why two years ago, we moved the podcast in that new direction. Since then, a lot has happened behind the scenes. For one, Ed and I co-founded an AI safety company with a good friend of ours who had also been following the AI safety story closely, and who was a senior leader in the US defense world. We’ll have some exciting things to announce there soon, and I can genuinely say that I never expected that we’d be able to make the impact we have in AI safety this quickly.

But the concerns that brought us here have also come into sharper focus. GPT-3 did indeed trigger a revolution in AI capabilities, and the field does seem to be coming to the view that AI scaling may get us much of the way to human-level or superhuman AI. At the same time, we’ve started to see empirical evidence that suggests we should expect catastrophic outcomes from developing those kinds of systems by default.
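
To give a concrete sense of what “scaling” refers to in the paragraph above: the empirical scaling-law literature (most notably Kaplan et al., 2020) found that language-model loss falls off roughly as a power law as parameters, data, and compute grow. The short Python sketch below only illustrates the shape of that relationship; the constants are ballpark values along the lines of the published parameter-count fit, included here as illustrative assumptions rather than anything claimed in the episode.

```python
# Toy illustration of a power-law scaling curve of the kind reported in the
# neural scaling-law literature (e.g. Kaplan et al., 2020). The constants are
# ballpark, illustrative values -- not authoritative measurements.

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Cross-entropy loss as a power law in parameter count: L(N) = (N_c / N) ** alpha."""
    return (n_c / n_params) ** alpha

# Larger models give lower predicted loss, with smoothly diminishing returns.
for n in (1e8, 1e9, 1e10, 1e11):  # 100M to 100B parameters
    print(f"{n:.0e} params -> predicted loss ~ {predicted_loss(n):.3f}")
```

The point isn’t the exact numbers; it’s that the curve keeps bending down smoothly as you scale, which is a big part of why the field has started to take “just keep scaling” seriously as a path toward very capable systems.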

And as we’ve gone deeper into this space, my time has been consumed more and more by keeping up with the state of the art in AI and AI safety, which has left me with less time to explore other topics on the podcast. And that’s challenging, because I know that many of our listeners — the TDS community — still want to hear about other things: data visualization, pipeline development, and data science tooling.

So, as important as state-of-the-art AI is, and as important as AI safety is, I think the deeper exploration of those topics that I need to do belongs in a slightly different venue. For that reason, this will be the last episode of the TDS podcast, at least for now. I’ll still be exploring these topics going forward on the new Gladstone AI podcast, which you can find linked in the show notes, and we’ll continue to publish those episodes on the TDS blog, but as an independent project better suited to exploring these specific ideas. If you’d like to follow me there, I’d be honoured to have you join me for the rest of this journey.

In this final episode, I’ll offer my perspective on the last two years of AI progress, and what I think it means for everything from AI safety to the future of humanity.

You can find the links I discuss in the episode below.

  • The new Gladstone AI podcast, where I’ll be talking about one new, cutting-edge AI model each week in plain English (its use cases, its potential malicious applications, and its relevance to AI alignment risk).
  • My upcoming book: Quantum Mechanics Made Me Do It.
  • Our two episodes discussing instrumental goals in AI safety: Alex Turner and Edouard Harris.
  • 80,000 Hours: a website where you can get advice on how to contribute to solving AI safety and AI policy problems.
  • Concrete Problems in AI Safety: an oldie but a goodie that introduces a lot of the central problems in AI alignment that remain open to this day.
  • Some episodes to check out if you’re interested in AI policy (in order of recency): Ryan Fedasiuk, Rosie Campbell, Ben Garfinkel, and Helen Toner.
  • Some episodes to check out if you’re interested in technical AI alignment (in order of recency): Irina Rish, Alex Turner, Jan Leike, Daniel Filan, Andy Jones, Evan Hubinger, Brian Christian, Ryan Carey, Stuart Armstrong, Ethan Perez, Anders Sandberg, David Krueger, Rob Miles, Dylan Hadfield-Menell, Rohin Shah, and Edouard Harris.
  • My appearances discussing AI safety on other podcasts: The Evan Solomon Show, Policy Options, Super Data Science, The Banana Data Podcast, Ken Jee’s podcast, Calls From The Future, and Data Bytes.
  • Gladstone AI’s AI model tracker: aitracker.org.

Chapters:

  • 0:00 Intro
  • 6:00 The Bitter Lesson
  • 10:00 The introduction of GPT-3
  • 16:45 AI catastrophic risk (paper clip example)
  • 23:00 Reward hacking
  • 27:30 Approaching intelligence
  • 32:00 Wrap-up

