
Do You Also Feel AI Is Going Too Fast? | by Alberto Romero | Oct, 2022



I have a feeling of shared excitement mixed with visceral haste to avoid missing out, and sheer information overwhelm

The Flash. Credit: Author via Midjourney

I suspect I’m not the only one who feels AI is going too fast.

I’ve read so many comments on forums and social media about this that I’ve concluded the sensation is shared among insiders and witnesses alike: AI is apparently progressing so fast we can’t keep up — not even those of us who do this for a living.

It’s not the first time this idea has come up (AI has been accelerating since the early 2010s), but it’s the first time I’ve seen the feeling become so prevalent, so broadly apparent that it’s tangible — like the sudden calm before the storm.

Andrej Karpathy’s Tweet

Before we continue, let me set a premise that may or may not be right: Let’s assume the AI field is, indeed, advancing as fast as we perceive it to be.

It could be a case of “looks are deceiving” — the number of published papers doesn’t necessarily correlate with meaningful progress. I don’t care much about that in the first section. I focus on the sensation, not on the underlying reality.

In the final section, however, I cover that possibility to give you tools and thought-provoking arguments so you can reassess your stances and approaches to AI knowledge. You know I like nuanced takes.

That said, I could approach this topic from a hundred different perspectives. Let’s make it clear what this article is and what it isn’t.

What this article is

This is my main purpose: Capturing that feeling of shared excitement mixed with visceral haste to avoid missing out, and sheer information overwhelm. It feels unique to current times in AI.

If you follow the news and trends weekly, you know what I’m talking about: The fear-inducing sensation of being lost in an increasingly complex world that slips through your fingers (don’t confuse this with AGI or the Singularity, please).

This article is also about how to handle that sensation and its consequences (it may not correlate with reality as much as you think). Being a knowledgeable individual has its perks, but also its perils.

Being too close to AI progress forces you to learn these skills: how to avoid jumping on trendy bandwagons, how to set aside the fear of missing out (FOMO), how to keep your hype at healthy levels, and how to keep your critical thinking sharp.

What this article isn’t

This article isn’t a thorough evaluation of the truth of the statement “AI is going too fast,” although the second section covers the tools you need to assess it yourself.

It isn’t about whether this unconstrained progress is valuable in getting us closer to our goals. AI people have different reasons to move the field forward: from building useful products, to making the world more creative, to understanding the human mind, to building superintelligence.

Yet, this apparent progress could also amount to PR stunts meant to extol particular companies. I won’t go into that here.

It isn’t about the reasons AI is advancing so fast now specifically, instead of, say, 5 years ago.

And it isn’t about how to slow down progress — although some people have repeatedly argued this should be considered, not dismissed.

Emily M. Bender’s Tweet

Now that we’re all clear, let’s go with the first section.

This article is a selection from The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between algorithms and people. It will help you understand the impact AI has on your life and develop the tools to better navigate the future.

The Cambrian AI explosion took off after a deep learning-based computer vision algorithm amply beat competitors at the ImageNet challenge in 2012.

Since then, AI has been progressing rapidly. Progress hasn’t been constant; it has accelerated. If we focus on the 2012–2022 decade, the second half has seen many more advances than the first.

Credit: Krenn et al.

(The number of published papers is a poor metric for measuring this, but it serves as a proxy to make my point.)

However, this is natural in growing scientific fields. Progress drives more progress. We’re used to this. What seems different about AI is that not only progress itself, but also its rate of increase, appears to be accelerating.

A common form of this phenomenon is what people call exponential progress (although, as physicist Theodore Modis argues, “nothing in nature follows a pure exponential”).

Why do people have this feeling about AI? I can find many reasons: AI is getting more popular. Investors and companies are devoting more resources to research and development. Papers and publications are receiving more attention. Proofs of concept are being shipped into products and services more often. People have access to the latest models. And each breakthrough entails subsequent breakthroughs.

To illustrate this, let me take you on a one-paragraph simplified ride of the last five years of language research:

The transformer architecture, which Google published in 2017, sparked an interest in language modeling. This allowed OpenAI to devise the scaling laws for large models, which led them to build GPT-3. This prompted other big tech companies and universities to work on their own models and publish more papers, which made the news everywhere, every month. This created new market opportunities that incentivized people to found new companies, which led to more competition. This, in turn, motivated open-source initiatives, which gave people access to the latest research in the form of apps and websites.

All of that in barely five years. Crazy fast.

But this year? This year has been the wildest, arguably in AI history. In terms of new research papers, new applications, new models, new companies… And, most importantly, in terms of the potential societal and economic impact of the discoveries that are taking place.

The rate of progress in 2022 has accelerated to such a point that even insiders are now feeling overwhelmed. And I’m not talking about your average engineer. You just saw Andrej Karpathy’s Tweet — he’s one of the most brilliant young minds in AI right now (now independent, previously @ Tesla, OpenAI).

2022 has been (and still is) the year of generative AI and diffusion models (although the feeling I’m trying to capture easily extrapolates to other branches, like biology-focused AI research or the well-known subfield of language understanding).

The latest news on generative AI — which has prompted me to write this article — is that companies are already creating text-to-video models (Make-A-Video and Phenaki). We’re still digesting the rapid development of image generation models like DALL·E and Stable Diffusion and companies are already jumping into the next great breakthrough.

Just look at this 2-minute video generated with a continuous sequence of prompts.

And it’s not only Karpathy. This feeling of out-of-control progress is widely shared by people who follow these developments closely. They’re overwhelmed. I’m talking about people who know a model or paper is out the same day it’s published. You can’t get closer than that. And it’s those people who are “sounding the alarms.”

And I’m not referring to comments of the form “Wow, how fast this is going.” No, we’re at a point where people are beginning to say: “Hey, this is going too fast, maybe we should slow down,” or “I can’t keep up even if I’m trying as hard as I can.” Just look at the quote tweets here:

Meta AI’s Tweet

Or here:

AK’s Tweet

Of course, not everyone is taking this sensation as a problematic sign. Some are more excited than ever.

This is what exponential growth feels like (even when it isn’t actually exponential).

Here’s what Karpathy answered when someone asked him about what AI will look like in 30 years:

Andrej Karpathy’s Tweet

It’s one thing to know rationally — as a distant thought — that, in the future, AI will radically change the world and we won’t be able to keep up with advances anymore. It’s another, very different thing to already feel it inside.

This isn’t to say current AI acceleration is leading us to AGI or sentient machines — I don’t believe so — but the feeling of accelerated progress, overwhelming information, and strong FOMO is very real for so many of us.

And it’s because of this ubiquitous feeling that this second section is so important.

Let me start with this: Even if you feel AI is going too fast, you could be wrong. The real-world effects of AI may not be as impressive as they seem from a close-up perspective. This is a natural consequence of not zooming out from the developments often enough to look at the real world.

The rate of progress may be supersonic at Google and OpenAI while, at the same time, 80% of the world (a made-up figure) hasn’t even heard of GPT-3. I mean, almost 40% of the global population doesn’t have internet access.

However, it can be partially real: Generative AI, in particular, is enjoying a combination of high freedom and low friction when it comes to creating models and converting them into ready-to-use apps thanks to open-source trends. It’s a matter of weeks or even days.

When a tangible reality merges with the appearance of progress, it’s harder to dismiss the feeling that it’s getting out of control.

That said, I’m not going to argue here about AI’s true rate of progress.

I won’t try to convince you that it’s going slower than you think. And I won’t try to convince you that, even if it’s advancing, the future you foresee isn’t the direction we’re heading in.

What I care about in this section is giving you my arguments about what happens when we feel overwhelmed by information, the haste to know more, and FOMO. And what to do to counter those sensations and their consequences.

“Sorry, no time to create meaningful evaluation. Gotta keep up with arXiv!”

This Tweet by linguist Emily M. Bender perfectly captures the first idea:

Emily M. Bender’s Tweet

Her scathing sarcasm is on point. I agree with her that the immediate consequence of feeling you can’t keep up with AI progress is devoting all your resources to trying — dismissing everything else.

The haste prompts us to give up on seemingly non-critical tasks like reflecting on, analyzing, and evaluating the implications and repercussions of AI research and development.

Sadly, this doesn’t seem to be isolated to witnesses like me or you. We merely believe AI may be going too fast. People building these systems face this problem too. And they don’t just believe; they know.

Even if progress isn’t as significant as it feels, they must keep writing papers and building models (whatever the purpose). This leaves them unable to spend enough time assessing the societal impact of AI — not all of which is good.

AI safety and AI ethics people (two groups that sound like they’re solving similar problems, but nothing could be further from the truth) are the only ones trying to compensate for the accelerating nature of the field.

But it isn’t working as well as they’d wish. The former are hyper-focused on the alignment problem (which, in my opinion, is less urgent than the societal and cultural issues happening here and now), and the latter — along with people who dismiss unbridled enthusiasm as hype — are branded by many as “AI critics” or “AI deniers.”

Melanie Mitchell’s Tweet

Leaving these two groups aside, what people are feeling now is strong FOMO. Fear of missing out on the next big thing, the low-hanging opportunities that constantly arise, or the ability to prepare for an impending AI-powered future.

And what is this a recipe for? You guessed it, AI hype.

More FOMO → less reflection → more hype → more FOMO → …

The vicious cycle that arises from feeling you’re missing out is hard to break.

You devote more resources to keep up with progress, which reduces your ability and time to reflect on the value, truth, or goals of that apparent progress.

This makes you less aware of the nuances surrounding AI developments, which puts you in the perfect position to fall victim to the hype induced by exaggerated headlines and unapologetic PR stunts.

This is super common.

The only cure for this is constant, unconditional, healthy skepticism toward any new paper or development you encounter. I try to apply this to both my reading and my writing.

Critical thinking shuts down in the face of overwhelming information

This is one of the most idiosyncratic problems of our time. It goes far beyond AI. It happened with COVID. It’s happening with the Russia–Ukraine war. And it’ll continue to happen. We’re fed much, much more information than we can possibly digest.

Digesting information and reflecting on it draw on the same mental pool of resources. If the amount of information we have to digest to stay up to date surpasses a certain threshold, people tend to shut down critical thinking. The reason appears to be that it costs much more effort than simply swallowing whatever news comes your way.

Without time for reflection, people simply believe what they read.

People fighting AI hype won’t be enough. Threads on Twitter or posts on Substack aren’t going to be enough either. People under the influence of ever-growing hype will become the main victims of ideas like “The Singularity Is Near,” AGI around the corner, or sentient AIs.

To finish this piece, I’m going to share with you some Tweets that emphasize the importance of critical thinking when there’s information overload, undisclosed interests, and a shared feeling of urgent optimism:

Tante’s Tweet
Talia Ringer’s Tweet
Gary Marcus’ Tweet
Emily M. Bender’s Tweet

I predict that articles like this one (I may be biased) — articles that try to strike a balance between the excitement of AI innovations and their shortcomings — are only going to become more necessary if we continue on the current path.



