
Why More Diversity in Competitive AI Roles Requires the Reinvention of the Internship Pipeline




Nathan Esquenazi

Co-Founder at CodePath – a 501(c)(3) nonprofit transforming computer science education. Learn more @ codepath.org

Addressing the biases in our artificial intelligence systems is one of the most urgent challenges of our age. Why? Because the consequences of these biases are at best problematic and at worst deadly.

After almost two decades as a professional engineer, I view artificial intelligence as a mirror held up to humanity. After all, AI systems learn from data generated by human decisions, so they simply reflect our own society’s biases back to us, encoded and amplified in our technology.

In an infamous example from 2015, a software developer noticed that Google’s image-recognition system was labeling photos of Black people as “gorillas.” When similar computer-vision systems are used in autonomous vehicles such as Tesla’s, their inability to accurately detect people with darker skin tones can have life-threatening results.

Terrifying as these examples are, they do not mean we have to lose hope in the further development of AI-enabled technology. In fact, there are plenty of tools that engineering teams can use to identify and remove bias in their data and algorithmic models and prevent unjust, even disastrous, outcomes.
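To make that concrete, here is a minimal sketch of one such check: comparing a model’s error rate across demographic groups on a labeled evaluation set. The column names and the toy data are hypothetical placeholders for illustration, not a reference to any particular tool or dataset.

```python
# A minimal bias audit: compare misclassification rates across demographic
# groups. The column names ("group", "label", "prediction") and the toy data
# are hypothetical placeholders.
import pandas as pd

def error_rate_by_group(df: pd.DataFrame,
                        group_col: str = "group",
                        label_col: str = "label",
                        pred_col: str = "prediction") -> pd.Series:
    """Return the misclassification rate for each demographic group."""
    errors = df[label_col] != df[pred_col]          # boolean Series of mistakes
    return errors.groupby(df[group_col]).mean()     # mean of booleans = error rate

if __name__ == "__main__":
    # Toy evaluation set: the model performs noticeably worse on group "B".
    eval_df = pd.DataFrame({
        "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
        "label":      [1,   0,   1,   0,   1,   0,   1,   0],
        "prediction": [1,   0,   1,   0,   0,   0,   0,   1],
    })
    rates = error_rate_by_group(eval_df)
    print(rates)
    print("Largest gap between groups:", round(rates.max() - rates.min(), 2))
```

A large gap between groups is a signal to revisit the training data and the model before the system ships. A gap of zero is not proof of fairness, but skipping the check guarantees that nothing will be caught.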

The key, however, starts with diversity on the teams tasked with building the AI systems that shape our present and future.

Building AI teams with diverse representation across race, gender, sexual orientation, age, economic conditions, and more makes for more accurate and ethical AI systems. The more people from different backgrounds working with the data, the more likely the team is to spot bias before it is too late. Diversity also drives more creative thinking and improves your ability to scale to larger and broader markets.

How to Hire a More Diverse AI Team

As it stands, tech companies are severely lacking in diversity. A 2019 study by the AI Now Institute found that less than 5% of the workforce at leading technology companies was Black or Latinx. Meanwhile, Black and Latinx people make up 13.4% and 18.4% of the U.S. population, respectively.

These gaps can be attributed to a multitude of factors – including the historical and structural barriers faced by minority students and prospective job candidates.

Data shows that as many as 80% of Black, Latinx, Indigenous, first-generation, and low-income students who start a CS degree drop out of their program.

Guided by this data, we chose to intervene at a key point in a student’s trajectory – early in their college years – and to build programming that supports their continued investment in computer science, their success in earning a degree, and their entry into competitive internships and full-time technical software roles.

It is critical that paid internships, which build practical skills, connections, and confidence, be baked in early and throughout a student’s learning journey.

The goal of these early internships, designed especially for rising juniors, is to give underrepresented students technical experience and support early in their college careers, before they decide to drop out or switch majors.

For example, the Summer Internship for Tech Excellence (SITE) program for underrepresented CS students has already seen success, with nearly 86% of its 2021 graduates securing paid junior-year internships. Another pre-internship program, Futureforce Tech Launchpad, is kicking off with its first cohort of 25 pre-interns in June.

The companies behind these programs, and others like them, are among the leaders shaping the future of artificial intelligence we are stepping into, which is why it is essential that their product teams reflect that diversity.

But even a company that is not directly working on AI is probably using it for hiring, which means exclusions may be continuously embedded in the software used to assess prospective talent.
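As an illustration of how such exclusions can be surfaced, the sketch below applies the “four-fifths rule” heuristic commonly used in adverse-impact analysis to the pass-through rates of a hypothetical résumé-screening model. The group labels and counts are invented for the example.

```python
# A minimal adverse-impact check for an automated hiring screen, based on the
# four-fifths rule heuristic: if any group's selection rate falls below 80% of
# the highest group's rate, the screen deserves scrutiny. Groups and counts
# below are hypothetical.
from typing import Dict, Tuple

def selection_rates(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """outcomes maps group -> (num_selected, num_applicants)."""
    return {g: selected / applicants for g, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes: Dict[str, Tuple[int, int]], threshold: float = 0.8) -> None:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        flag = "FLAG" if ratio < threshold else "ok"
        print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f} [{flag}]")

if __name__ == "__main__":
    # Hypothetical pass-through numbers from a résumé-screening model.
    four_fifths_check({
        "Group A": (120, 400),  # 30% pass rate
        "Group B": (45, 300),   # 15% pass rate -> flagged
    })
```

Flagging a group whose selection rate falls below 80% of the highest group’s rate does not by itself prove bias, but it is a cheap, continuous check that any team deploying automated screening can run.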

This only further demonstrates the need for more human-centered bridges – fostering relationships and providing mentorship – between strong underrepresented engineering candidates and working tech professionals.

AI is everywhere. And unless we start to intentionally recruit more diverse teams to design and build it, we are going to continue to see problematic and dangerous effects of this technology on our world. 

However, through cross-sector partnerships – bringing together universities, technology companies, and other nonprofits – we can reinvent the pipeline to the tech workforce and perhaps begin to see a more equitable world reflected back at us through the mirror of the artificial intelligence we build.
